Like what you see? Subscribe here and get it every week in your inbox!

Issue #206 - February 19, 2023

Here are the top threads of the week, happy reading!

Top comment by eddieroger

This thread has a lot of language suggestions, but I think you need a fun problem to solve. Pick a toy app and work on it. Learn embedded programming with an Arduino. Write an iPhone app to track when you feed the dog. Make a karaoke app for Android. If you pick a project you'll enjoy, the language won't matter. Use whatever is best for the platform. The fun doesn't come with the language, it comes with seeing your project come to life, or others' reaction to the cool thing you made. Every language sucks and every language is the best depending on who you ask. Don't focus so deeply on that part. Solve cool and fun problems.

I've had a long-time dream of getting a broken jukebox and gutting it for a Raspberry Pi, but keeping the external interface intact. I know next to nothing about hardware engineering, and can't write more than ten lines of Python without needing to Google something, but I finally got a broken juke in need of fixing, a multimeter, and a hard drive full of music ready to go. The fun isn't the Python or learning GPIO, it's that eventually I will have a cool retro jukebox with hardware and software I wrote in my basement.
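
If you ever try something similar, the GPIO side can be surprisingly small. Here's a rough Python sketch using the gpiozero library and the mpg123 command-line player; the pin number, music path, and player are illustrative assumptions, not details from the comment.

    # Hypothetical jukebox button: GPIO pin 17, /home/pi/music, and mpg123 are assumptions.
    import random
    import subprocess
    from pathlib import Path
    from signal import pause

    from gpiozero import Button  # common GPIO helper library on the Raspberry Pi

    MUSIC_DIR = Path("/home/pi/music")  # assumed location of the ripped library (must contain .mp3 files)

    def play_random_track():
        # Pick any track and hand it to an external player; a real jukebox would queue selections.
        track = random.choice(list(MUSIC_DIR.glob("*.mp3")))
        subprocess.Popen(["mpg123", str(track)])

    button = Button(17)                  # physical selector button wired to GPIO 17
    button.when_pressed = play_random_track
    pause()                              # keep the script alive, waiting for presses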

Top comment by glenngillen

Work and career progression: “always take the interview”.

I reported directly to the CEO at a startup, and I told him our largest customer had headhunted me; I assumed I wasn’t special and they were probably targeting many of us. He asked me what the process was like and what the offer was, and when I told him I hadn’t taken the initial call, he shook his head. “You always take the interview. Absolute worst case is you confirm you’re happy where you are. Median case is you confirm your worth and force me to have a difficult conversation where you’ve got the most leverage. Best case is you find a new, better thing. But through it all you’re exercising the muscle of interviewing while you’re at the top of your game. People always let it atrophy and then try to rebuild it when they’re at their low point: frustrated with where they are, unexcited about the things they’re working on, and unable to fake talking about them in the overly positive ways they need to. Don’t wait until a low point to work out what comes next. Always take the interview”.

Top comment by TheRealDunkirk

Fraud, causing third parties to insert themselves between companies and freelancers. And then, of course, comes the VC attempt to monopolize this arbitrage, leaving fewer and fewer options for freelancers to work through.

The entire world is undergoing a shift because of the internet, but I don't see people talking about it. We've had a social system of personal ethics and credibility up till now, but "platforms" create this many-to-many relationship between everyone, none of whom know anything about the other, really. So everyone hides behind legal protections that are becoming more and more onerous. All of this only favors the bigger pockets. So it's a trend right up and past the point the OP is talking about, and will -- can -- only get worse.

I'm sure there's a term for all of the de-personalization, but I don't know what it is. I'm also sure that it's responsible for massive, widespread decline of mental health in society, but I digress.

And I say all of this after having dipped my toe into the consulting racket 25 years ago, working with a boss of a friend who turned out to be a giant asshat. (To wit: He embezzled all the company money on boats and trips.) I have ideas on some software that would greatly benefit some big companies in a particular market, but I know I'd have to work through someone else to make it happen, and I just don't have the energy to do that.

Top comment by janussunaj

(OP here) I wanted to avoid a wall of text, so I'll elaborate here on where I'm coming from.

My complaints with FAANG have to do with perverse incentives that reward nonsensical decisions, poorly thought-out and over-engineered projects, grandiose documents, duplication of work, selective reporting of metrics, etc.

The few times I had a really good manager, a sane environment, and fulfilling work, it only lasted until the next reorg. It seems like most organizations are either stressful, with a lot of adversarial behavior, or have almost nothing to do but depressing busywork. I also find the social aspect lackluster, if not downright alienating. I feel at a dead end both in career growth and in opportunities to learn on the technical side. I could roll the dice with another team change, but I'm not eager about the prospects.

Most of my work experience is in ML, but I don't want to box myself into that. I find the current hype around generative models insufferable, and the typical ML project today consists of somewhat sloppy Python and a lack of good engineering practices. I'm also tired of the increasingly long and opaque feedback loops (come up with an idea, wait for your giant model to retrain, hope that some metric goes up). I'm still passionate about some aspects (e.g., learning representations, knowledge grounding, sane ML workflows).

I hear that academia has similar issues (though again I mostly know about ML), and I imagine lots of industries have worse conditions than tech. I realize that sloppiness and politics are a fact of life, so I'm wary of falling into the "grass is greener" trap.

Top comment by JonathanBeuys

You don't even need a fancy "send html fragments over the wire" approach to create a better user and developer experience.

Just sending full, server-side rendered pages, like Hacker News and Wikipedia do, is fine:

Going from the HN homepage to this topic we are on:

    36 KB in 5 requests.

Going from the Wikipedia homepage to an article:

    824 KB in 25 requests.

Going from the AirBnB homepage to an apartment listing:

    11.4 MB in 265 requests.

Going from the Reddit homepage to a Reddit thread:

    3.74 MB in 40 requests.
In comparison to AirBnB and Reddit, HN and Wikipedia feel blazingly fast. And I am sure the developer experience is an order of magnitude nicer as well.
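
To make that concrete, here's a minimal sketch of the full-page approach using only Python's standard library; the routes and post data are made up for illustration.

    # Minimal full-page server-side rendering, standard library only.
    # Routes and POSTS data are illustrative, not a real site.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    POSTS = {
        "1": ("Ask HN: Example thread", "Just send complete HTML pages."),
        "2": ("Show HN: Another thread", "No client-side framework required."),
    }

    PAGE = ('<!doctype html><html><head><title>{title}</title></head>'
            '<body><h1>{title}</h1>{body}<p><a href="/">Home</a></p></body></html>')

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/":
                items = "".join(
                    f'<li><a href="/item/{pid}">{title}</a></li>'
                    for pid, (title, _) in POSTS.items()
                )
                html, status = PAGE.format(title="Homepage", body=f"<ul>{items}</ul>"), 200
            elif self.path.startswith("/item/") and self.path[6:] in POSTS:
                title, text = POSTS[self.path[6:]]
                html, status = PAGE.format(title=title, body=f"<p>{text}</p>"), 200
            else:
                html, status = PAGE.format(title="Not found", body=""), 404
            payload = html.encode("utf-8")
            self.send_response(status)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == "__main__":
        # Every navigation is one request for one complete page; the browser does the rest.
        HTTPServer(("", 8000), Handler).serve_forever()

Each navigation then costs one HTML document plus whatever assets it references, which is the main reason the HN and Wikipedia numbers above stay so small.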

Top comment by agentultra

I think there are plenty of people that remain skeptical of their utility for this application.

People who want to get rich will tell you it's the next greatest thing that will revolutionize the industry.

Personally, I've been annoyed at how confidently wrong ChatGPT can be. Even when you point out the error and ask it to correct the mistake, it comes back with an even-more-wrong answer. And it frames it like the answer is completely, 100% correct and accurate. Because it's essentially really deep auto-complete, it's designed to generate text that sounds plausible. This isn't useful in a search context when you want to find sources and truth.

I think there are useful applications for this technology, but I think we should leave that to the people who understand LLMs best and keep the charlatans out of it. LLMs are really interesting and have advanced by leaps and bounds... but I don't see how replacing entire institutions and processes with something that is only well understood by a handful of people is a great idea. It's like watering plants with Gatorade.

Top comment by pringk02

I use `fish` as my main shell. It's closer to POSIX than eshell or oil, but it's definitely not POSIX.

Personally, the main thing I like about it is that I can't copy-paste bash commands. It forces me to read, understand, and then convert. It also has some quality-of-life stuff I really like. I think next time I'm just going to go back to ZSH, though, as I feel I have learnt enough from the experience of using it that I'm no longer getting new things out of it, and I don't script in fish enough for the nicer syntax to be worth it for me. Honestly, I prefer ZSH variables over fish variables too.

Top comment by bee_rider

We can’t even prove other humans are conscious, right? We just assume it because it would be silly to assume we are somehow unique.

I think it will not really be a sharp line, unless we actually manage to find the mechanism behind consciousness and re-implement it (seems unlikely!). Instead, an AI will eventually present an argument that it should be given Sapient Rights, and that will convince enough people that we’ll do it. It will be controversial at first, but eventually we’ll get used to it.

That seems like the real threshold. We’re fine with harming sentient and conscious creatures as long as they are sufficiently delicious or dangerous, after all.

Top comment by andrewmcwatters

If I understand correctly, ChatGPT doesn't have its latent capabilities removed. Instead, they're suppressed by training with negative feedback. These special prompts are supposed to find the remaining stochastic spaces where ChatGPT can still produce the desired output that training hasn't suppressed.

So, the danger seems to be that there is no currently documented way to completely remove these possible outputs, because that's just not how these systems work.

Prompt engineering in this specific usage could be thought of as injection, but from what I understand, there's currently no known sanitization process. In theory one could use the system itself to determine intent and sanitize input that way, but I believe it's possible to craft a prompt whose intent the system acts on while the intent check itself fails to recognize it. That would be akin to bypassing sanitization.

ChatGPT seems to already do some form of this intent processing, either inherently or explicitly. But all prompt crafting at the moment is first based on this injection or jailbreaking to bypass intent sanitization.
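
To make the intent-sanitization idea (and the bypass) concrete, here's a rough sketch. The call_model function is a hypothetical placeholder for whatever LLM API is in use; nothing here reflects how ChatGPT actually implements this.

    # Sketch of "use the system itself to determine intent" as input sanitization.
    # call_model() is a hypothetical placeholder, not a real client library.

    def call_model(prompt: str) -> str:
        """Placeholder for a real LLM call (e.g., an HTTP request to a hosted model)."""
        raise NotImplementedError

    CLASSIFIER_PROMPT = (
        "Classify the intent of the following user message as ALLOWED or DISALLOWED.\n"
        "Reply with exactly one word.\n\nUser message:\n{message}"
    )

    def answer_with_sanitization(user_message: str) -> str:
        # First pass: ask the model to judge intent before anything acts on the prompt.
        verdict = call_model(CLASSIFIER_PROMPT.format(message=user_message)).strip().upper()
        if verdict != "ALLOWED":
            return "Request refused."
        # Second pass: only messages the classifier accepted reach the answering model.
        return call_model(user_message)

    # The weakness described above: a crafted message can carry an intent that the
    # answering pass acts on while the classifier pass reads it as harmless (for
    # example, instructions wrapped in role-play), so the check is bypassed even
    # though each pass behaves "correctly" on the text it sees.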

Top comment by mauvehaus

PostgreSQL, hands down. https://www.postgresql.org/docs/

And they do a great job of keeping docs for older versions available. They're on version 15, and docs going back to 7.2 are still available.

If I can't figure out how something works in another DBMS, I usually read the comparable postgresql docs to get a good overview, and then go and re-read the docs for what I'm actually using to fill in the implementation-specific bits.