This is amazing. My entire web browser session state for every private and personal website I sign onto every day will be used for training data. It's great! I love this. This is exactly the direction humans should be going in to not self-destruct. The future is looking bright, while the light in our brains dims to eventual darkness. Slowly. Tragically. And for what purpose exactly. So cool.
I have no plans to download Atlas either, but I think your browsing isn't used for training unless you opt in.
> By default, we don’t use the content you browse to train our models. If you choose to opt-in this content, you can enable “include web browsing” in your data controls settings. Note, even if you opt into training, webpages that opt out of GPTBot, will not be trained on.
https://openai.com/index/introducing-chatgpt-atlas/
Seeing people work tirelessly to make The Matrix a reality is great. I can't wait!
"You're absolutely right!"
Being "anti-web" is the least of its problems.
This thing is an absolute security nightmare. The concept of opening up the full context of your authenticated sessions in your email, financial, healthcare, or other websites to ChatGPT is downright reckless. Aside from personal harm, the way they are pushing this is going to cause large-scale data breaches at companies that harbour sensitive information. I've been the one pushing back against hard-blocking AI tools at my org so far, but this may have turned me around, for OpenAI at least.
I mean... Edge has had Copilot integrated for years, and Edge actually has users, unlike Atlas. Not sure why people are getting shocked now...
Is this the security flaw thingy that stores OAuth or Auth0 tokens in a SQLite database with overly permissive read privileges on it?
No, I'm talking about the general concept of having ChatGPT passively able to read sensitive data / browser session state. Apart from the ever-present risk that they suck your data in for training, the threat of prompt injection or model inversion being used to steal secrets or execute transactions without your knowledge is extreme.
>We left command-line interfaces behind 40 years ago for a reason
No, we didn't.
If anything, Claude Code's success disproves this.
I think there is a misunderstanding about who is meant by "we".
I mean, it's clear he means for the majority of users and OSes... not the HN crowd specifically.
Long live Doug McIlroy!
Came here to say this. As a software dev I'm deeply offended lol
Exactly, the whole world runs on CLI based software.
The comparison with Zork and the later comment about having to "guess" what to input to get a CLI to work were also bizarre. He's obviously stretching really hard to make the analogy work.
Don't get me wrong, I'm not arguing that the expansion of GUI-based interfaces wasn't a good thing. There are plenty of things I prefer to interact with that way, and the majority of people wouldn't use computers if CLIs were still the default method. But what he's describing is literally not how anyone ever used the command line.
Ask your non-dev peers if they know what the command line is and if they have ever used it or even seen it, especially when most people use the web on their smartphone.
Network Engineers, Systems Engineers, Devops.
Anyone who deals with any kind of machine with a console port.
CLIs are current technology that receives active development alongside GUIs for a wide range of purposes.
Heck, Windows currently ships with 3 implementations: Command Prompt, PowerShell AND Terminal.
I recently demonstrated ripgrep-all with fzf to search through thousands of PDFs in seconds on the command line and watched my colleague’s mind implode.
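For anyone curious, here is a minimal sketch of that kind of setup, assuming rga (ripgrep-all) and fzf are installed; the query string and directory are just placeholders:

    # List documents (PDFs included) whose extracted text matches the query,
    # then fuzzy-pick one in fzf with a preview of the surrounding matches.
    rga --files-with-matches 'search term' ~/Documents \
      | fzf --preview 'rga --pretty --context 3 "search term" {}'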
I'm aware. Just having a bit of fun. Obviously the vast majority of computer users don't even know what a command line is.
The Atlas implementation isn't great, but I'll pick something that tries to represent my interests every time. The modern commercial web is an adversarial network of attention theft and annoyance. Users need something working on their behalf to mine through the garbage to pull out the useful bits. An AI browser is the next logical step after uBlock.
It seems naive to expect a product by a company that desperately needs a lot of revenue to cover even a tiny part of investor money that it burned—where said product offers unprecedented opportunity to productize users in ways never possible before, and said company has previously demonstrated its disregard for ethics—to represent users' interests.
It’s unlikely LLM operators can break even by charging per use, and it should be expected that they’ll race to capture the market by offering “free” products that in reality are ad serving machines, a time-tested business model that has served Meta and friends very well. The fact that Atlas browser is (and they don’t even hide it) a way to work around usage limits of ChatGPT should ring alarm bells.
Completely agree. Consumers won’t pay for anything online, which means every business model is user-hostile. Use the web for five minutes without an ad blocker and it’s obvious.
Atlas may not be the solution but I love the idea of an LLM that sits between me and the dreck that is today’s web.
Completely unrelated to what you actually wrote about, but... this is the second time in a week that I've heard "dreck" used in English. Before, I never noticed it anywhere. The first was the "singing chemist" scene in Breaking Bad, and now in your writing. I wasn't aware English had adopted this word from German as well. Weird that I never noticed it until just now, when the scene I watched is already 15 or so years old...
Merriam-Webster dates the English word "dreck" to 1922, though it seems to come from the Yiddish drek and is therefore much older.
Baader-Meinhof phenomenon.
>> The modern commercial web is an adversarial network of attention theft and annoyance
It feels like $10 / month would be sufficient to solve this problem. Yet, we've all insisted that everything must be free.
It's a lot more than that. The U.S. online ad market is something like $400-500 billion a year, which spread across roughly 330 million people works out to about $100/mo per person. The problem is that some people are worth a lot more to advertise to than others. Someone who uses the internet a lot and has a lot of disposable income might be more like $500+ a month.
Where do I send the $10/month? No, seriously, I'd easily pay $10/month to never see another ad, cookie banner, or dark pattern, or have my information resold again. As long as that $10 is promised to never increase, other than adjustments for inflation.
But I can't actually make that payment - except maybe by purchasing a paid adblocker - where ironically the best open source option (uBlock Origin) doesn't even accept donations.
You'd need to pay a lot more, because advertisers pay way more than $10 per month per user; you'd have to outpay the advertisers.
How much do advertisers pay per customer, and where can I find this analysis?
Kagi.com
Setting up Kagi is as big an improvement to search as an ad blocker is to your general internet experience. After about a week you forget how bad the bare experience is, and after a month you'll never go back.
I'm definitely behind some of my peers on adopting LLMs for general knowledge questions and web search, and I wonder if this is why. Kagi does have AI tools, but their search is ad free and good enough that I can usually find what I'm looking for with little fuss.
Add actual accessibility on top, and I'd happily pay 20 EUR/month.
Ads are not the only problem with the modern web. Accessibility (or the lack thereof) is more of an issue for me. 20 years ago, we were still hopeful the Internet would bring opportunities to blind people. These days, I am convinced the war has been lost, and modern web devs, with their willingness to adopt every new nonsense, are the people who hurt me most in current society.
One reason I now often go to ChatGPT instead of many Google queries is that the experience is ad-free, quick and responsive.
That said, don't be lured: you know they're already working on ways to put ads and trackers and whatnot inside ChatGPT and Atlas. Those $20 a month won't be enough to recoup all that investment and cost and maximize profits.
So I think we should be careful what we wish for here.
> quick and responsive
This is kind of surprising, because those are precisely the ways I would say that a Web search is better than ChatGPT. Google is generally sub second to get to results, and quite frequently either page 1 or 2 will have some relevant results.
With ChatGPT, I get to watch as it processes for an unpredictable amount of time, then I get to watch it "type".
> ad-free
Free of ads that ChatGPT was paid to deliver, maybe. Because it was trained on the public Internet, it is full of advertising content.
Update: Example query I just did for "apartment Seville". Google completed in under a second. All the results above the fold are organic, with sponsored way down. Notably the results include purchase, long-term and vacation rental sites. The first 3 are listing sites. There's an interactive map in case I know where I want to go; apartments on the map include links to their websites. To see more links, I click "Next."
ChatGPT (MacOS native app) took ~9 seconds and recommended a single agency, to which it does not link. Below that, it has bullet points that link to some relevant sites, but the links do not include vacation rentals. There are 4 links to apartment sites, plus a link to a Guardian article about Seville cracking down on illegal vacation rentals. To see more links, I type a request to see more.
For all the talk about Google burying the organic links under a flood of ads, ChatGPT shows me far fewer links. As a person who happily pays for and uses ChatGPT daily, I think it's smart to be honest about its strengths and shortcomings.
Google is SEOed a lot. And while "apartment Seville" is the kind of query where Google is probably very good, for many things it gives me very bad results, e.g. searching for an affordable haircut always gives me a Yelp link (there is a Yelp link for every adjective + physical storefront SMB).
That being said, I've never really come across good general ways to get Google to give me good results.
I know some tricks, e.g. Filetype:PDF, using Scholar for academic search, and using "site:...", something like "site:reddit.com/r/Washington quiet cafe" for most things people would want to do in a city, because people generally ask about those things on community forums.
But I have a poor time with dev-related queries, because half the time it's SEO'd content, and when I don't know enough about a subject, LLMs generally give me a lot of lines of inquiry (be careful of X and also consider Y) that I would not have thought to ask about, because I don't know what I don't know.
I use Google and ChatGPT for totally different reasons - ChatGPT is generally far better for unknown topics, while Google is better if I know exactly what I'm after.
If I'm trying to learn about a topic (for example, how a cone brake works in a 4WD winch), then ChatGPT gives me a great overview with "Can you explain what a cone brake is and how it works, in the context of 4WD winches?" while Google, with the search "4wd winch cone brake function explained", turns up a handful of videos covering winches (not specifically cone brakes) and some pages that mention them without detailing their function. ChatGPT wins here.
If I were trying to book a flight I’d never dream of even trying to use ChatGPT. That sort of use case is a non-starter for me.
As bad as AI experiences often are, I speculate that we are actually in a golden age before they are fully enshittified.
This sentiment has been rolling around in my head for a while. I assume one day I'll be using some hosted model to solve a problem, and suddenly I won't be able to get anything out beyond "it would actually work a lot better if you redeployed your application on Azure infra with a bunch of licensed Windows server instances. Here's 15 paragraphs about why.."
I found myself avoiding Google lately because of their AI responses at the top. But you can block those, and now Google is much nicer.
Gopher. Gemini (the protocol not the AI). IRC.
uBlock Origin allows me to control what I see while that information is still in its original context, so that I can take that into account when doing research, making decisions, etc.
But isn't this, instead, letting a third party strip that context away and give it its own context so that you can't make those choices and decisions properly? Information without context is, to me, nearly worthless.
And even if you believe they are neutral parties and have your own interests at heart (which, quite frankly, I think is naive), once companies like that know everything about you, you don't think they'll abuse that knowledge?
Not sure about that. I'll be happy with uBlock, thanks.
uBlock just takes stuff off of a page that shouldn't be there in the first place (a couple of example filter rules are sketched below). All the content that should be there is still there, unchanged.
An AI browser is choosing to send all the stuff you browse, to a third party without a demonstrated interest in keeping it all private, and getting back stuff that might or might not be true to the original content. Or maybe not even true at all.
Oh and - Atlas will represent your interests, right up until OpenAI decides it's not in their financial interest to do so. What do you do when the entire web browser UI gets enshittified?
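For illustration, this is roughly what uBlock Origin filter rules look like; the hostname and class name below are made up:

    ! Network rule: block requests to an ad server (hypothetical host)
    ||ads.example.com^
    ! Cosmetic rule: hide a sponsored element on the page (hypothetical class)
    example.com##.sponsored-box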
Wow! Amazing post! You really nailed the complexities of AI browsers in ways that most people don't think about. I think there's also a doom paradox: if more people search with AI, it disincentivizes people from posting on their own blogs and websites, where ad revenue usually helps support them. If AI is crawling and then spitting back information from your blog (you get no revenue), is there a point to posting at all?
I tested Google Search, Google Gemini and Claude with "Taylor Swift showgirl". Gemini and Claude gave me a description plus some links. Both were organized better than the Google search page. If I didn't like the description that Claude or Google gave me, I could click on the Wikipedia link. Claude gave me a link to Spotify to listen, while Gemini gave me a link to YouTube to watch and listen.
The complaint about the OpenAI browser seems to be that it didn't show any links. I agree, that is a big problem. If you are just getting error-prone AI output, then it's pretty worthless.
Every professional involved in SaaS, the web, or online content creation thinks the web is a beautiful thing.
In reality, the rise of social media means the web failed a long time ago; it only serves the void not taken by mobile apps, and now LLM agents.
Why do I need to read everything about Taylor Swift on her website if I don't know a single song of hers? (I actually do.)
I don't want a screaming website telling me about her best new album ever and her tours if the LLM knows I don't like pop music. The other way around: if you like her, you'd want a different set of information. No website can do that for you.
This, and the new device that OpenAI is working on, is part of a general strategy to build a bigger moat by having more of an ecosystem, so that people will keep their subscriptions and also get Pro.
Atlas strategy:
- Atlas slurps the web to get more training data, bypassing Reddit blocks, Cloudflare blocks, paywalls, etc. It probably enriches the data with additional user signals that are useful.
- Atlas is an attempt to build a sticky product that users won't switch away from. An LLM or image model doesn't really have sticky attachment, but if it starts storing all of your history and data, the switching costs could become immense. (Assuming it actually provides value and isn't a gimmick.)
- Build pillars of an interconnected platform. Key "panes of glass" for digital lives, commerce, sales intent, etc. in the platformization strategy. The hardware play, the social network play -- OpenAI is trying to mint itself as a new "Mag 7", and Atlas could be a major piece in the puzzle.
- Eat into precious Google revenue. Every Atlas user is a decrease in Google search/ads revenue.
Ycombinator Application: Replace entire World Wide Web with my own WWW
Response: Already achieved by OpenAI!
https://stockanalysis.com/list/magnificent-seven/
I guess Mag 7 is the new FAANG, not the mag-7 shotgun
> There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet.
:skull:
This article is deep, important, and easily misinterpreted. The TL;DR is that a plausible business model for AI companies is centered around surveillance advertising and content gating like Google or Meta, but in a much more insidious and invasive form.
Worth reading to the end.
Atlas feels more like a task tool than a browser. It’s fast, but we might lose the open web experience for convenience.
The SV playbook is to create a product, make it indispensable and monopolise it. Microsoft did it with office software. Payment companies want to be monopolies. Social media are of course the ultimate monopolies - network effects mean there is only one winner.
So I guess the only logical next step for Big AI is to destroy the web, once they have squeezed every last bit out of it. Or at least make it dependent on them. Who needs news sites when OpenAI can do it? Why blog - just prompt your BlogLLM with an idea. Why comment on blogs - your agent will do it for you. All while avoiding child porn with 97% accuracy - something human-curated content surely cannot be trusted to do.
So I am 0% surprised.
The thing about command lines is off base, but overall the article is right that the ickiness of this thing is exceeded only by its evil.
It's really crazy that there is an entire AI-generated internet. I have zero clue what the benefit of using this would be to me. Even if we argue that it has fewer ads and such, that would only last until they garner enough users to start pushing charges. Probably through even more obtrusive ads.
I also have to laugh. Wasn't OpenAI just crying about people copying them not so long ago?
The purpose is total control. You never leave their platform, there are no links out. You get all of your information and entertainment from their platform.
It’s also a classic tactic of emotional abuse:
https://www.womenslaw.org/about-abuse/forms-abuse/emotional-...
I think the idea of "we're returning to the command line" is astute, tbh; I've felt that subconsciously, and I think the author put it into words for me.
The article does taste a bit like a conspiracy theory to me, though.
I think we're returning to CLIs mostly because typing remains one of the fastest ways we can communicate with our computers. The traditional limitation was that CLIs required users to know exactly what they wanted the computer to do. This meant learning all commands, flags etc.
GUIs emerged to make things easier for users to tell their computers what to do. You could just look at the screen and know that File > Save would save the file instead of remembering :w or :wq. They minimized friction and were polished to no end by companies like MSFT and AAPL.
Now that technology has gotten to the point where our computers can bridge the gap between what we said and what we meant reasonably well, we can go back to CLIs. We keep the speed and expressiveness of typing but without the old rigidity. I honestly can't wait for the future where we evolve interfaces into things we previously only dreamt of.
That being said, asking ChatGPT to do research in 30 seconds that might otherwise require me to set aside an hour or two is letting me make decisions about where to tinker and which ideas to chase down much faster.
can never go back
It’s not so much a conspiracy theory as it is a perfect alignment of market forces. Which is to say, you don’t need a cackling evil mastermind to get conspiracy-like outcomes, just the proper set of deleterious incentives.
Atlas confuses me. Firefox already puts Claude or ChatGPT in my sidebar and has integrations so I can have it analyze or summarize content or help me with something on the page. Atlas looks like yet another Chromium fork that should have been a browser extension, not a revolutionary product that will secure OpenAI's market dominance.
1.0 - algorithmic ranking of real content, with direct links
2.0 - algorithmic feeds of real content with no outbound links - stay in the wall
3.0 - slop infects rankings and feeds, real content gets sublimated
4.0 - algorithmic feeds become only slop
5.0 - no more feeds or rankings, but on demand generative streams of slop within different walled slop gardens
6.0 - 4D slop that feeds itself, continuously turning in on itself and regenerating
I normally don't waste a lot of energy on politics.
But this feels truly dystopian. We here on HN are all in our bubble; we know that AI responses are very prone to error and just great at mimicking. We can differentiate when to use them and when not to (more or less), but when I talk to non-tech people in a normal city not close to a tech hub, most of them treat ChatGPT as an all-knowing factual authority.
They have no idea of the conscious and unconscious bias in the responses, based on how we ask the questions.
Unfortunately I think these are the majority of the people.
If you combine all that with a shady Silicon Valley CEO under historic pressure to make OpenAI profitable after 64 billion in funding, regularly flirting with the US president, it seems entirely logical to me that exactly what the author described is the goal. No matter the cost.
Since it feels like AI progress is stagnating and it's mainly the cost of producing AI responses that is going down, this almost seems like the only way out for OpenAI to win.
All that remains now is for Satya, after hearing glowing feedback, to make this the default browser in Windows as part of Microsoft and OpenAI's next chapter.
Eh, I use ChatGPT for so many things that I realize how many projects I used to just let go by.
1 - nobody cares about being "pro-web" or "anti-web"
2 - we didn't leave command-line interfaces behind 40 years ago
1 - I care
2 - That's an entirely different situation and you know it.
OpenAI should be 100% required to rev share with content creators (just like radio stations pay via compulsory licenses for the music they play), but this is a weird complaint:
> “sometimes this tool completely fabricates content, gives me a box that looks like a search box, and shows me the fabricated content in a display that looks like a web page when I type in the fake search box”
If a human wrote that same article about Taylor Swift, would you say it completely fabricates content? Most “articles” on the web are just rewrites of someone else’s articles anyway and nobody goes after them as bad actors (they should).
At this point, my adoption of AI tools is motivated by fear of missing out or being left behind (I’m a self-taught programmer in India running my own little SaaS).
I have memory and training enabled. What I can objectively say about Atlas is that I’ve been using it and I’m hooked — it’s made me roughly twice as productive. I solved a particular problem in half the time because Atlas made it easy to discover relevant information and make it actionable. That said, affording so much control to a single company does make me uneasy.
Not sure why this is getting downvoted, but to clarify what I meant:
With my repo connected via the GitHub app, I asked Atlas about a problem I was facing. After a few back-and-forth messages, it pointed me to a fork I might eventually have found myself — but only after a lot more time and trial-and-error. Maybe it was luck, but being able to attach files, link context from connected apps, and search across notes and docs in one place has cut a lot of friction for me.