

Issue #260 - March 3, 2024

If you are looking for work, check out this month's Who is hiring?, Who wants to be hired? and Freelancer? Seeking Freelancer? threads.

Here are the top threads of the week, happy reading!

Top comment by kccqzy

Heard from a friend in China: the age-calculation portion of an app for scheduling marriage certificate appointments had a bug where it subtracted 22 (the legal minimum age) from the current year, which on February 29, 2024 produced 2002-02-29, a date that doesn't exist. The app intended to compare that date against the user's birth date, and its error handling assumed all errors came from the comparison. It therefore rejected every marriage certificate appointment, complaining that the users were too young to marry legally.
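A minimal Python sketch of that failure mode (function names are illustrative, not from the actual app): naive year subtraction blows up when today is February 29 and the target year is not a leap year.

```python
from datetime import date

def latest_legal_birth_date_buggy(today: date, min_age: int = 22) -> date:
    # Naive year subtraction: raises ValueError when `today` is Feb 29
    # and the target year is not a leap year (2024-02-29 -> 2002-02-29).
    return today.replace(year=today.year - min_age)

def latest_legal_birth_date_safe(today: date, min_age: int = 22) -> date:
    # Fall back to Feb 28 when the computed date doesn't exist.
    try:
        return today.replace(year=today.year - min_age)
    except ValueError:
        return today.replace(year=today.year - min_age, day=28)
```

On any other day the two functions agree; only a leap day exposes the bug.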

Top comment by dijit

This is something that's easy to have an opinion on so you're going to get buried.

I'll do my best to make a high-signal comment here, but it will be drowned by all the other replies, which also likely touch on these points.

First, "slow-thinking" is really just a different way of expressing your thinking and you should begin by leaning into it rather than leaning away. Take time, allow yourself to pause to collect your thoughts. People often interpret quietness (not filler) as intelligence and maturity (because usually it is). Alternatively not answering is also valid.

Second, as a person who is generally regarded as quick-witted and sharp, most of that sharpness comes from either having a few pre-known responses to ideas or anxiously rehearsing situations in my head ahead of time. This is, generally, a bad thing because it means I have made decisions on how I will respond to things without all of the information (as some will come up during conversation). Methodically thinking things through, fresh, is probably the only realistic way to be open minded.

Finally, do your best to avoid situations where a "quick decision" is needed; this is good advice even for "quick thinkers". Fast decisions are often poor ones. The counterpoint is dragging something out over many weeks or across many meetings, but putting yourself in a situation where the unknowns become knowns, and where the scope of the landscape and the weight of the decision can be properly assessed, is important. Better still, you will likely end up with a better paper trail this way.

One absolutely final piece of advice: Avoid using the word "slow", use "deliberate" instead.

Top comment by shdon

I've been frequenting this site daily for over 15 years now, and I don't think I've had error messages more than a handful of times in that decade and a half. Today was indeed one such time - which is what triggered me to check out this particular post. But at least in my experience it's not what I'd call "often". Maybe you're just very unlucky, or perhaps I'm extremely lucky... Who can tell?

Top comment by weinzierl

A couple of months ago I attended a presentation of an on-prem LLM. An audience member asked if it was using OpenAI in any way.

The presenter, somewhat overeagerly, replied, "Why not ask our new AI?" and went on to type: "Are you an independent model or do you use OpenAI?"

The chatbot answered, in flowery language, that it was indeed using ChatGPT as a backend. Which it was not, and which was kind of the whole point of the presentation.

Top comment by jiehong

Maybe less fun than others, but actually not so bad: leetcode [0]

Otherwise, in the same vein as the SQL murder mystery, you can try the Hanukkah of Data [1].

[0]: https://leetcode.com/problem-list/leetcode-curated-sql-70/

[1]: https://hanukkah.bluebird.sh

Top comment by decafninja

It’s always been recruiters reaching out to me. I can’t see it being much different now.

I’ve never, ever, gotten a human response from any company that I cold applied. Either ignored, or get the automated rejection email.

Recruiters that I could cold contact usually tell me thanks for reaching out, and that they’ll be in touch if they’re interested. They never are.

I could probably get -a- job through networking, but probably not a job I’d be very interested in. I have a network, but very few people work somewhere that I’d like to work at. The few that do would at best guarantee I get fed into the leetcode pipeline and then it would all be on me to leetcode my way in.

I know some master networkers who can get offers and switch jobs via their network at the drop of a hat. Again, the catch here is that they have to be very unpicky. Overall, many of the best places to work are gatekept with leetcode.

I’m currently at what I consider an “endgame” company. I wouldn’t mind if this is the last company I work at (I’m 40). If I can last here until I’m late 40s or early 50s, then I’d rather semi-retire instead of putting myself through the leetcode gauntlet again.

Top comment by voidhorse

I read a lot. I used to have time to read basically whatever, and would generally just follow the bibliographic and referential trails in other books. Now, my time is much more limited, so whenever I want/need to learn about something new, I'm more disciplined and devise an "essential reading list" as a first step. To do this, I'll perform a basic Google search for the topic, check a Wikipedia article, or search for the topic on a publisher's website. I'll read the abstract for each book and try to determine which ones are oriented toward beginners. I'll usually select at least three "beginner books" and read them, along with a couple of select "advanced" books that I know I'll have to wait to get to until later.

I try to read for at least 45 minutes each day and I take notes on the books I read. From there I move on to the more advanced stuff I gathered and use my old habit of following bibliographic references for more.

Umberto Eco has a book on how to write a PhD thesis, How to Write a Thesis. I think a lot of the techniques described in that book are valuable for any kind of research, whether your aim is to write a thesis or just to learn something new.

Top comment by imperialdrive

Life is just hard. I don't know how folks managed 50 years ago, 100 years ago, 500+ years ago; it was just our ability to keep pushing through the mud, the cold nights, the loneliness, sometimes letting a glimpse of a beautiful day or sparkling night be enough to keep going.

Someone told me that life and work and relationships are sometimes just pushing everything you have into a giant, deep, dark hole in the ground, and every now and then, something pops back out. If you're lucky, it makes it all worth it. The only way to know is to keep trying. Maybe you'll give up on all your hard work and be even more depressed, but eventually get a job that doesn't even seem good, but then the days get better and better and one night you feel pretty darn good before bed and you realize all that behind you was just part of the journey, and it's ok. Let yourself feel sad, doubt, fear, get it out, behind you, make room for the good. You will discover it, I'm sure of it. If it's really hard to keep going, then you're on the right path. Life just is not easy! Good luck!

Top comment by ndjdbdjdbev

Hi!

Sorry to hear about your situation.

If you are in Alberta, the Banff Centre will give you board, cheap food, and pay for work (cleaning rooms, preparing food, etc.).

I know this is not ideal, but being safe and warm is greater than being in the elements in the winter.

So I will say that my information is about 15 years old here... But I hope it helps...

Top comment by lolinder

There's a misconception in the question that is important to address first: when an LLM is running inference it isn't querying its training data at all, it's just using a function that we created previously (the "model") to predict the next word in a block of text. That's it. When considering plain inference (no web search or document lookup), the decisions that determine a model's speed and capabilities come before the inference step, during the creation of the model.

Building an LLM model consists of defining its "architecture" (an enormous mathematical function that defines the model's shape) and then using a lot of trial and error to guess which "parameters" (constants that we plug in to the function, like 'm' and 'b' in y=mx+b) will be most likely to produce text that resembles the training data.

So, to your question: LLMs tend to perform better the more parameters they have, so larger models will tend to beat smaller models. Larger models also require a lot of processing power and/or time per inferred token, so we do tend to see that better models take more processing power. But this is because larger models tend to be better, not because throwing more compute at an existing model helps it produce better results.
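The training/inference split described above can be sketched with a toy bigram "model" (an illustrative stand-in, nothing like a real transformer): training bakes the data into parameters once, and inference then predicts the next word using only those parameters, never the corpus itself.

```python
from collections import Counter, defaultdict

# "Training": build the parameters (bigram counts) from a corpus, once.
corpus = "the cat sat on the mat the cat ran".split()
params = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    params[prev][nxt] += 1

# "Inference": predict the next word using only the parameters.
# The corpus is never consulted at this step.
def predict_next(word: str) -> str:
    return params[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" ("the cat" appears twice, "the mat" once)
```

Here the "model" is just a lookup table of counts; in an LLM it's billions of floating-point parameters, but the same principle holds: inference speed depends on the size and shape of the model, not on the size of the training data.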