

Issue #64 - May 24, 2020

Here are the top threads of the week. Happy reading!

Top comment by nabilhat

Assume for a moment I'm a bad-faith, nosy employer who reads HN on a Saturday morning. All it takes for me to match up my little stack of current employees' resumes is a person's city of residence, skills, and employment dates. If I'm that kind of employer, that's enough to raise my red flags. If prior employers are named outright, that's a 100% ID. If employment dates are paired with employment location, that's a 100% ID.

I've known employers like this. I've worked for employers like this. Employers are already monitoring social media. Third-party services are paid by employers to monitor for staff who might be looking at other jobs. Recruiters make it their mission to know who's looking and which employers are likely to need their services in the near future. This is much of why trust and discretion are the most important assets on both sides of hiring-related activities.

Triplebyte burning down their reputation as a recruitment avenue is one thing. Locking job searchers into reputation and livelihood risks inside Triplebyte's own reputation dumpster fire, on the Friday before a holiday weekend, during historic unemployment levels, in the middle of a fucking pandemic, is unforgivable. The CEO showing up in person with hamfisted gaslighting (seriously?) in the middle of this self-made disaster makes me hope those comments don't get flagged out of future HN search results.

Top comment by Animats

Self-driving cars. Now that the hype is over and the fake-it-til-you-make-it crowd has tanked, there's progress. Slowly, the LIDARs get cheaper, the radars get more resolution, and the software improves.

UE5's rendering approach. They finally figured out how to use the GPU to do level of detail. Games can now climb out of the Uncanny Valley.

The Playstation 5. 8 CPUs at 3.2GHz each, 24GB of RAM, 14 teraflops of GPU, and a big solid state disk. That's a lot of compute engine for $400. Somebody will probably make supercomputers out of rooms full of those.

C++ getting serious about safety. Buffer overflows and bad pointers should have been eliminated decades ago. We've known how for a long time.

Electric cars taking over. The Ford F-150 and the Jeep Wrangler are coming out in all-electric forms. That covers much of the macho market. And the electrics will out-accelerate the gas cars without even trying hard.

Utility scale battery storage. It works and is getting cheaper. Wind plus storage plus megavolt DC transmission, and you can generate power in the US's wind belt (the Texas panhandle north to Canada) and transmit it to the entire US west of the Mississippi.

Top comment by userbinator

As a long-time Win32 developer, my only answer to that question is "of course there is!"

The efficiency difference between native and "modern" web stuff is easily several orders of magnitude; you can write very useful applications that are only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions.
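
As a rough illustration of that claim, here is a minimal sketch (not from the comment itself) of a complete native Win32 program in C. It depends only on user32.dll, which every Windows release since 95 has shipped; built with a size-conscious toolchain such as MinGW with -Os -s (an assumption here), the single resulting binary is typically on the order of tens of KB or less, and the same executable runs across decades of OS versions.

    /* hello.c - a complete native Win32 application in one small file. */
    #include <windows.h>

    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                       LPSTR lpCmdLine, int nCmdShow)
    {
        /* One classic API call that has existed since Windows 95. */
        MessageBoxA(NULL, "Hello from a tiny native binary.", "Win32", MB_OK);
        return 0;
    }

    /* Example build (MinGW; exact binary size depends on the toolchain):
       gcc -Os -s -mwindows hello.c -o hello.exe */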

Yes, computers have gotten faster and memory and disks much larger. That doesn't mean we should waste those resources to deliver the same or even less functionality than we had on the machines of 10 or 20 years ago.

For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II. Even the web stuff at the time was nowhere near as disgusting as it is today --- AJAX was around, websites did use JS, but simple things like webchats still didn't require as much bloat. I lived through that era, so I knew it was possible, but the younger generation hasn't, so perhaps it skews their idea of efficiency.

In terms of improvement, some things are understandable and rational, such as newer video codecs requiring more processing power because they are intrinsically more complex and that complexity is essential to their increase in quality. But other things, like sending a text message or email, most certainly do not. In many ways, software has regressed significantly.

Top comment by mediaman

Slightly different topic, but Google also suspended our company's AdWords account, I believe for using keywords related to COVID-19.

The business is an American manufacturer that added capacity to manufacture PPE to make up for the lack of Chinese supply. Since we were supplying direct to the market, our PPE prices were in line with pre-COVID market prices, or cheaper. We weren't out to make a killing, just to fill up some manufacturing time and help folks out. We had the equipment, the people, the facility.

However, we didn't have a great way to reach people who needed it - healthcare was not our normal industry - so we decided to put it up on AdWords.

Within 24 hours, the account was suspended. We appealed it (thinking it must have been a mistake), and a month later, they told us they reviewed it and maintained the suspension. We told them we were only promoting PPE to help people in health care find supply and they didn't care. We've never had suspension issues before.

The whole experience left a very negative taste for Google. With their extreme dominance in advertising market share, they no longer need to cater to customers' needs. (Maybe they care if you're a multimillion-dollar customer, but certainly not if you're an everyday SME manufacturer.) And there aren't a lot of alternatives to turn to for that type of advertising. There was no recourse, no discussion, no reasoning. Just the Google blank wall.

We wound up manufacturing lots of it anyway and getting it to hospitals in need, but Google actively tried to stop distribution of American-made PPE during the pandemic.

Top comment by jbk

I've been working on VideoLAN (VLC, x264, other...) for most of my professional life.

For a long time, I had other jobs, working at startups around video, and doing VLC on nights, weekends, and holidays.

And now, I built a couple of companies around Open Source multimedia, where we do consulting, integration, custom development around software, applications development, licensing, support and so on.

Those companies are now paying around 25 FTE. It's not too bad, but not impressive either...

The employees spend most of their time on improvements to the open source software; working for clients is a minority of their time.

But I've worked a lot, and by a lot, I really mean a fuckton... And the rewards are not big.

Top comment by econcon

There is a shortage of 3D printing filament because of the virus, so I've been making filament and selling it.

https://medium.com/endless-filament/make-your-filament-at-ho...

This activity also helps recycle waste plastic.

The production cost of filament is $7.50 per 5 kg, a roll holds 850 grams of filament, and a spool can be sold for $20-30.
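
To make the margin explicit, here is a quick back-of-the-envelope sketch in C, using only the rough figures quoted above (material cost only; labor, electricity, equipment, and marketplace fees are not included):

    #include <stdio.h>

    int main(void)
    {
        /* Figures quoted above, treated as rough assumptions. */
        double cost_per_kg   = 7.50 / 5.0;             /* $7.50 per 5 kg of raw material */
        double spool_kg      = 0.850;                  /* 850 g of filament per spool    */
        double material_cost = cost_per_kg * spool_kg; /* roughly $1.3 per spool         */

        /* Quoted selling range of $20-30 per spool. */
        printf("Material cost per spool: $%.2f\n", material_cost);
        printf("Gross margin per spool:  $%.2f to $%.2f\n",
               20.0 - material_cost, 30.0 - material_cost);
        return 0;
    }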

It's trivial to get the quality right.

You can sell rolls on Amazon, eBay and Etsy or your own Shopify store and use Facebook ads/Google Ads to advertise your website.

Top comment by barrkel

Symbols are pictures too, and they carry denser meaning than diagrammatic pictures.

It's not that difficult concepts are easier in visual form. It's that concepts are verbosely described in visual form. Verbosity puts a ceiling on abstraction and makes things explicit, which is why things seem simple for people to whom everything is new (experts, on the other hand, find it harder to see the wood for the collection of tall leafy deciduous plants).

When you need abstraction, you need to compress your representation. You need to replace something big with something small. You replace a big diagram with a reference. It has a name.

Gradually you reinvent symbolic representation in your visual domain.

Visual programming excels in domains that don't need that level of abstraction, that can live with the complexity ceiling. The biggest win is when the output is visual, when you can get WYSIWYG synergies to shorten the conceptual distance between the visual program and the execution.

Visual programming is at its worst when you need to programmatically manipulate the construction of programs. What's the use of building all your database tables in a GUI when you need to construct tables programmatically? You can't script your GUI to do that; you need to learn a new paradigm, whether it's SQL DDL or some more structured construction. So now you've doubled what you need to know, and things aren't so simple.

Top comment by jtolds

Here's the best setup I've used so far:

1) Get the cheapest iPad that supports the Apple Pencil. The 2018 non-Pro fit that bill last I looked.

2) Get the Google Jamboard app (not the Jamboard hardware, it is not at all worth it).

3) Share the "jam" with yourself on a different device (a nearby laptop).

4) Screenshare the laptop.

Things I think any virtual whiteboard scheme needs to have:

1) You need to be able to see people's faces! If you can't see the people in the video call, good luck having anything feel natural.

2) See #1 again. Having the laptop drive the video call is important so you can configure it to see everyone's face while you present.

3) Being able to use a pen to write and a finger to erase (if you have to open a menu, fail; sadly the Jamboard app also gets this wrong, though their way-overpriced hardware gets it right).

4) Ideally, you have an infinitely scrolling whiteboard. Jamboard doesn't do this, but it's close.

The Jamboard app also works on phones so other people can fairly easily join in and contribute. This scheme has its problems, but holy crap, so many whiteboard apps focus way too much on fancy new widgets and shapes and text and whatever and not enough on getting out of the way.

Top comment by as-j

TL;DR: use a language your company can support. It doesn't matter how suited to the job a language is; if it's a single engineer or a small team, what happens when they move on? How do you support it? Who's on call?

Not Elixir, but a cautionary tale from our Erlang project. About 8 years ago our IoT backend was written in Erlang. This was the early days of IoT, so sure, it made sense as a technology: it could handle all the symmetric sessions, etc. It was a good tool for the job and in theory could scale well.

But, there's always a but. We're a Python shop on the backend and C on embedded devices. Engineers move on, there were some lean times, and after a few years there was no one left who could support an Erlang system. So we hired for the position; it's key infrastructure, but not really a full-time project, so we hired a Python+Erlang person.

But! Now they're on call 24/7, since the regular on-call roster are Python people, and when the Erlang service goes wrong, it's a call to the one engineer. So, do you hire 2 or 3 people to support the service? No, you design it out. At the time it was 2018, and IoT services were plentiful, so we could use an existing service and Python.

Another way of looking at it: let's say it's a critical service and you need at least 3 people to support it/be on call. If each engineer costs $200k/year, that's $600k/year. Does this new language save you that much over using a more generic and widely known language in the org?

Top comment by mikelevins

Yep, I've used Common Lisp in production. Several times. I'm in the middle of writing a new app with LispWorks right now.

Common Lisp is generally the first thing I think of for any new work I undertake. There are exceptions, but usually my first choice is Common Lisp. It has been for a little over thirty years.

I've often worked in other languages--usually because someone wants to pay me to do so. I usually pick Common Lisp, though, when the choice of tools is up to me, unless some specific requirement dictates otherwise.

The objections you list might be an issue to some extent, but not much of one. Certainly not enough to discourage me from using Common Lisp in a pretty wide variety of projects. I've used it for native desktop apps, for web apps, for system programming, for text editors, for interpreters and compilers. When I worked on an experimental OS for Apple, the compiler and runtime system I used were built in Common Lisp.

I'll use something else when someone pays me enough to do it. I'll use something else because it's clearly the best fit for some specific set of requirements. I'll use something else when there isn't a suitable Common Lisp implementation for what I want to do. I'll even use something else just because I want to get to know it better.

I've taken various detours along the way into learning and using various other languages. I like several of them quite a bit. I just don't like them as much as Common Lisp.

The pleasure I take in my work is a significant factor in my productivity. Choosing tools that offer me less joy is a cost I prefer not to bear without good reason. That cost often exceeds the advantage I might realize from using some other language. Not always; but often.

There was once a language I liked even better than Common Lisp. Apple initially called it 'Ralph'. It evolved into Dylan, which, in its present form, I don't like as much as Common Lisp. If I or someone else invested the considerable time and effort needed to write a modern version of Ralph, then I might choose it over Common Lisp.

For now, though, I'll stick with the old favorite. It continues to deliver the goods.