TLDR

AI: @jeremyphoward “It looks like @johnowhitaker & I may have found something crazy: LLMs can nearly perfectly memorise from just 1-2 examples!”

Communication: Hugo, go brrrrrrr! (I fixed most of my Hugo issues and then realized I hadn’t added the About page correctly. I fixed that too.)
AI

Some days, it’s extra hard to turn around and focus on what you must focus on! Today, I got lost scrolling Twitter and zoning out on YouTube. I’m sure my brain absorbed something, but I had other things to do. I did discover that the training I was recently doing might be a complete waste. That’s an exaggeration, but the amount of data needed to fine-tune could be massively smaller, which would significantly narrow the use case for “Deep Lake” style storage. Meh, I’m too far out on the curve as it is and don’t need to adjust for any more “maybe one day things will be different” ideas.

Here’s a link to Jeremy Howard and Jonathan Whitaker’s post: Can LLMs learn from a single example?

Oh, and here is a song to distract you while reading 🫡

Communication

Taxonomy Thinking