# Benedict's Newsletter: No. 551 #Omnivore

## Colophon

title:: Benedict's Newsletter: No. 551
type:: [[clipped-note]]
tags:: [[&omnivore]] Newsletter
url:: https://omnivore.app/no_url?q=5e03d387-a512-463f-ab21-e00eafac5fb8
archive:: https://omnivore.app/me/benedict-s-newsletter-no-551-19103f795b1
date:: [[2024-07-30]]

## Highlights

tags::

> <mark class="omni omni-yellow">Google gives up on killing cookies</mark> [⤴️](https://omnivore.app/me/benedict-s-newsletter-no-551-19103f795b1#03ec8427-b8fd-43bc-a662-1999912dc091) ^03ec8427

tags::

> <mark class="omni omni-yellow">Google has never been able to solve the stakeholder alignment (i.e. cat-herding) of persuading privacy regulators, competition regulators, publishers and ad-tech companies that this is a: a good idea and b: would work. Now it looks like Google is giving up: instead of killing 3P cookies, it will keep them, and “introduce a new experience in Chrome that lets people make an informed choice”. The devil is in the wording, but that sounds like an ‘ask people if they want to block cookies’ button. Now the ad industry is scrambling to work out what that means, and what comes next. [LINK](https://ben-evans.us6.list-manage.com/track/click?u=b98e2de85f03865f1d38de74f&id=cf54b3c8b4&e=bea6d9620d)</mark> [⤴️](https://omnivore.app/me/benedict-s-newsletter-no-551-19103f795b1#6382d7c4-03e4-4ce0-b825-78e3d2f15be4) ^6382d7c4

If it walks like a button and it talks like a button…

tags::

> <mark class="omni omni-yellow">#### OpenAI burn rates</mark>
> <mark class="omni omni-yellow">Accuracy and ranking is one barrier to entry in search - another is just how much money you have, and the Information reports that OpenAI is already on track to burn $5bn this year. I doubt that it will have trouble raising more (and trying to get a share of the firehose of cash that comes from Google search might help), but it’s still a reminder that LLMs have unprecedented capital-intensity, especially for a technology that has yet to find broad product-market fit. [LINK](https://ben-evans.us6.list-manage.com/track/click?u=b98e2de85f03865f1d38de74f&id=e0994dcf27&e=bea6d9620d)</mark> [⤴️](https://omnivore.app/me/benedict-s-newsletter-no-551-19103f795b1#8f49f61a-7251-45b8-9c54-4c14e10e1201) ^8f49f61a

Also see Ed Zitron’s post… [[2024-07-29 - How Does OpenAI Survive- - Omnivore]]

tags::

> <mark class="omni omni-yellow">LLMs and IPR</mark> [⤴️](https://omnivore.app/me/benedict-s-newsletter-no-551-19103f795b1#9793f452-e7a8-4dbf-90b3-65b47e85bcc1) ^9793f452

tags::

> <mark class="omni omni-yellow">There is a growing collision between the philosophical view in many AI circles that training-by-looking is no different to what people do (after all, these systems aren’t Napster - they can’t generally reproduce what’s in the training data) and the legal status of ‘using’ people’s property in an entirely new way but without any new model for permission. [PERPLEXITY](https://ben-evans.us6.list-manage.com/track/click?u=b98e2de85f03865f1d38de74f&id=136f17d8cd&e=bea6d9620d), [RUNWAY](https://ben-evans.us6.list-manage.com/track/click?u=b98e2de85f03865f1d38de74f&id=2ed599afab&e=bea6d9620d)</mark> [⤴️](https://omnivore.app/me/benedict-s-newsletter-no-551-19103f795b1#b27eec8c-b131-451e-90f7-e7043ac91c04) ^b27eec8c

tags::

> <mark class="omni omni-yellow">## Ideas</mark>
> <mark class="omni omni-yellow">This week’s viral machine learning paper: LLMs collapse when trained ‘indiscriminately’ on data produced by LLMs. This speaks to the ‘model collapse’ problem, but needs to be read with caution, since the word ‘indiscriminately’ is important: this study is based on training that _only_ used data output from another model, which is more a proof-of-concept than a realistic scenario. In other words, we can use ‘synthetic data’, but only in some domains, to some degree, with caution. [LINK](https://ben-evans.us6.list-manage.com/track/click?u=b98e2de85f03865f1d38de74f&id=e350ada02b&e=bea6d9620d)</mark> [⤴️](https://omnivore.app/me/benedict-s-newsletter-no-551-19103f795b1#9519f10c-6f95-49da-a6cc-debcc5028449) ^9519f10c

tags::

> <mark class="omni omni-yellow">More generally, and linked as an example, Alexis Gallagher wrote a useful discussion of what LLMs might be doing, and whether they are reasoning or pattern-matching. It’s important to remember that we really don’t have a good theoretical model of _why_ LLMs produce such good results, and hence of what would change if we scaled them, used more synthetic data, or anything else. [LINK](https://ben-evans.us6.list-manage.com/track/click?u=b98e2de85f03865f1d38de74f&id=dcdcc8f19a&e=bea6d9620d)</mark> [⤴️](https://omnivore.app/me/benedict-s-newsletter-no-551-19103f795b1#342a7017-4f67-4343-aa93-5b431c1a6913) ^342a7017

[Read on Omnivore](https://omnivore.app/me/benedict-s-newsletter-no-551-19103f795b1)

[Read Original](https://omnivore.app/no_url?q=5e03d387-a512-463f-ab21-e00eafac5fb8)