aresant 6 hours ago

- This is a proper MSFT resource-allocation commentary, not an industry-wide canary in a coal mine

- MSFT CapEx is up 45% YoY, positioning for inference dominance

- MSFT is prioritizing AI inference for millions of customers over massive AI training for OpenAI

- Losing inference demand risks clients moving to AWS/GCP

- MSFT is betting that industry overbuild will lead to more attractive leasing terms in the future vs. owning

- OpenAI chases AGI; MSFT chases scalable, affordable AI that drives Azure adoption

- Leadership is risk-adjusted: they win if OpenAI succeeds, and win if open-source wins

  • OutOfHere 7 minutes ago

    Huh. It has everything to do with OpenAI planning to move away from Microsoft datacenters to its own datacenters.

  • MichaelZuo 4 hours ago

    If they are taking the bet that industry will overbuild, how is it not a canary in a coal mine?

    There is almost certainly at least one person in Redmond working on this that’s smarter and better informed than you, just going off probability.

    • snapcaster 3 hours ago

      >There is almost certainly at least one person in Redmond working on this that’s smarter and better informed than you, just going off probability.

      This isn't the actual test though, right? It's surely true, and yet companies make bad decisions all the time. It needs to be conditioned on "someone smarter and better informed than you _with influence in the organization sufficient to enact change_"

qgin 12 hours ago

A few things from the Dwarkesh interview with Satya:

* He sees data center leases getting cheaper in the near future due to everyone building

* He’s investing big in AI, but in every year there needs to be a rough balance between capacity and need for capacity

* He doesn’t necessarily believe in fast takeoff AGI (just yet)

* Even if there is fast takeoff AGI, he thinks human messiness will slow implementation

* It won’t be a winner-take-all market and there will be plenty of time to scale up capacity and still be a major player

  • djtango 11 hours ago

    > Even if there is fast takeoff AGI, he thinks human messiness will slow implementation

    Five years ago during covid, all these clunky legacy businesses couldn't figure out a CSV, let alone APIs, but suddenly with AGI they are going to become tech masters and know how to automate and processize their entire operations?

    I found it very amusing that at the turn of the decade "digitalisation" was a buzzword as Amazon was approaching its 25th anniversary.

    Meanwhile huge orgs like the NHS run on fax and were crippled by Excel row limits. Software made a very slow dent in these old, important, slow-moving orgs. AI might speed up the transition, but I don't see it happening overnight. Maybe 5 years, if we pretend smartphone adoption is indicative of AGI and humanoid robot rollout.

    • qgin 11 hours ago

      I think you don’t have widespread adoption until AI takes the form of a plug-and-play “remote employee”.

      You click a button on Microsoft Teams and hire “Bob” who joins your team org, gets an account like any other employee, interacts over email, chat, video calls, joins meetings, can use all your software in whatever state it’s currently in.

      It has to be a brownfield solution because most of the world is brownfield.

      • ethbr1 3 hours ago

        > I think you don’t have widespread adoption until AI takes the form of a plug-and-play “remote employee”.

        Exactly. The problem with the AGI-changes-everything argument is that it indirectly requires "plug-and-play" quality AGI to happen before / at the same time as specialized AGI. (Otherwise, specialized AGI will be adopted first)

        I can't think of a single instance in which a polished, fully-integrated-with-everything version of a new technology has existed before a capable but specialized one.

        E.g. computers, cell phones, etc.

        And if anyone points at the "G" and says "That means we aren't talking about specialized in any way," then you start seeing the technological unlikeliness of the dominoes falling as they'd need to for an AGI fast ramp.

      • kyle_grove 8 hours ago

        Honestly, I think the mode that will actually occur is that incumbent businesses never successfully adopt AI, but are just outcompeted by their AI-native competitors.

        • djtango 7 hours ago

          Yes this is exactly how I see it happening - just like how Amazon and Google were computer-native companies

          • joquarky 6 hours ago

            And Sears had all the opportunity to be Amazon

            • red-iron-pine 5 hours ago

              Sears also did everything it could to annihilate itself while dot-com was happening.

              Their CEO was a believer in making his departments compete for resources, leading to a brutal, dysfunctional clusterfuck. Rent-seeking behavior on the inside as well as outside.

      • jajko 9 hours ago

        Completely unusable in any bank, or almost any organization dealing with data secrecy. You have complex, often mandatory processes to onboard folks. Sure, these can be improved but hiring some 'Bob' would be far from smooth sailing.

        Big enough corps will eventually have their own trusted 'Bobs', just like they have their own massive cluster farms (no, AWS et al. is not a panacea, and it's far from a cheap-and-good solution).

        Giving any form of access for some remote code into the internal network of a company? Opsec guys would never ack that; there is and always will be malice coming from potentially all angles.

      • lazide 9 hours ago

        Hah, that isn’t a brownfield solution.

        These orgs could hire someone who could solve these issues right now (and for the last decade) if they would allow those people to exist in their org.

        The challenge is precisely that those people (or people with that capability) aren’t allowed to exist.

        • AlexCoventry 4 hours ago

          > those people (or people with that capability) aren’t allowed to exist

          I'm not sure what personal characteristics or capabilities you're referring to, FWIW.

        • dnissley 5 hours ago

          "Bob" in this example is just some other random individual contributor, not some master of the universe. E.g. they would have the title "associate procurement specialist @ NHS" and join and interact on zoom calls with other people with that title in order to do that specific job.

          • Earw0rm 2 hours ago

            Right, but these jobs are inefficient mostly because of checks and balances. So unless you have a bunch of AIs checking one another's work (and I'm not sure I can see that getting signed off), doesn't it just move the problem slightly along the road?

            There's an argument here something like: if you can replace each role with an AI, you can replace multiple roles with a single AI, so why not replace the structure with a single person?

            And the answer is typically that someone has deemed it significant and necessary that decision-making in this scenario be distributed.

    • Delphiza 10 hours ago

      As a first order of business, a sufficiently advanced AGI would recommend that we stop restructuring and switching to a new ERP every time an acquisition is made or the CFO changes, and stop allowing everyone to have their own version of the truth in Excel.

      As long as we have complex manual processes that even the people following them can barely remember the reason for, we will never be able to get AI to smooth them over. It is horrendously difficult for real people to figure out what to put in a TPS report. The systems that you refer to need to be engineered out of organisations first. You don't need AI for that, but getting rid of millions of Excel files is needed before AI can work.

      • kjellsbells an hour ago

        I don't know that getting rid of those wacky Excel sheets is a prerequisite to having AI work. We already have vendors like Automation Anywhere watching people hand-carve their TPS reports so that they can be recreated mechanistically. It's a short step from feeding the steps to a task runner to feeding them to the AI agent.

        Paradigm shifts in the technology do not generally occur simultaneously with changes in how we organize the work to be done. It takes a few years before the new paradigm backs into the workflow and changes it. Lift-and-shift was the path for several years before cloud native became a thing, for example. People used the iPhone to call taxi companies, etc.

        It would be a shame to not take the opportunity to tear down some old processes, but, equally, sometimes Chesterton's fence is there for good reason.

    • mattlondon 10 hours ago

      But why are these sorts of orgs slow and useless? I don't think it is because they have made a conscious decision to be so; I think it is more that they do not have the resources to do anything else. They can't afford to hire in huge teams of engineers and product managers and researchers to modernize their systems.

      If suddenly the NHS had a team of "free", genuinely PhD-level AGI engineers working 24/7, they'd make a bunch of progress on the low-hanging fruit and modernize and fix a whole bunch of stuff pretty rapidly, I expect.

      Of course the devil here is the requirements and integrations (human and otherwise). AGI engineers might be able to churn out fantastic code (some day at least), but we still need to work out the requirements and someone still needs to make decisions on how things are done. Decision making is often the worst/slowest thing in large orgs (especially public sector).

      • typewithrhythm 10 hours ago

        It's not a resource problem; no one inside the system has a real incentive to do anything innovative. Improving something incrementally is more likely to be seen as extra work by your colleagues and be detrimental to the person who implemented it.

        What's more likely is that a significantly better system is introduced somewhere, the NHS can't keep up, and it is rebuilt by an external provider. (Or, more likely, it becomes an inferior system of a lesser nation as the UK continues its decline.)

        • qgin 9 hours ago

          I think this is where the AGI employee succeeds where other automation doesn’t. The AGI employee doesn’t require the organization to change. It’s an agent that functions like a regular employee in the same exact system with all of its inefficiencies, except that it can do way more inefficient work for a fraction of the cost of a human.

          • _heimdall 9 hours ago

            Assuming we get to AGI and companies are willing to employ them in lieu of a human employee, why would they stop at only replacing small pieces of the org rather than replacing it entirely with AGI?

            AGI, by most definitions at least, would be better than most people at most things. Especially if you take OpenAI's definition, which boils it down only to economic value, a company would seemingly always be better off replacing everything with AGI.

            Maybe, more likely, AGI would just create superior businesses from scratch and put human companies out of business.

          • sevensor 9 hours ago

            This is a huge selling point, and it will really differentiate the orgs that adopt it from those who don’t. Eventually the whole organization will become as inscrutable as the agents that operate it. From the executive point of view this is indistinguishable from having human knowledge workers. It’s going to be interesting to see what happens to an organization like that when it faces disruptive threats that require rapid changes to its operating model. Many human orgs fall apart faced by this kind of challenge. How will an extra jumbo pattern matcher do?

          • mplewis 4 hours ago

            What you are describing is science fiction and is not worthy of serious discussion.

      • djtango 10 hours ago

        IMO it comes from inertia. People at the top are not digital-native. And they're definitely not AI-native.

        So you're retrofitting a solution onto a legacy org. No one will have the will to go far enough fast enough. And if they didn't have the resources to engineer all these software migrations, who will help them lead all these AI migrations?

        Are they going to take their hands off the wheel? Who is going to debug the inevitable fires in the black box that has now replaced all the humans?

        • ghaff 10 hours ago

          And many of the users/consumers are not digital-native either. My dad is not going to go to a website to make appointments or otherwise interact with the healthcare system.

    • markus_zhang 11 hours ago

      In fact, most of the industries out there are still slow and inefficient. Some physicians only accept phone calls for making appointments. Many primary schools only take phone calls, and an email could go either way, just not their way.

      It's just we programmers who want to automate everything.

      • semi-extrinsic 4 hours ago

        Today I spent 55 minutes in a physical store trying to get a replacement for two Hue pendant lights that stopped working. The lights had been handed in a month ago and diagnosed as "can't be repaired" two weeks ago. All my waiting time today was spent watching different employees punching a ridiculous number of keys on their keyboards and having to get manual approval from a supervisor (in person) three times. I am now successfully able to wait 2-6 weeks for the replacement lights to arrive, maybe.

        When people say AI is going to put people out of work, I always laugh. The people I interacted with today could have been put out of work by a well-formulated shell script 20 years ago.

        • markus_zhang 13 minutes ago

          Yeah. I hope software engineers slow down a bit. We are good enough. There is no need to push ourselves out of jobs.

      • graemep 11 hours ago

        Given how bad some of the apps and websites are I am not sure phone calls are any worse! They are also less prone to data breaches and the like.

      • QuadmasterXLII 10 hours ago

        “Only takes phone calls for appointments” is a huge selling point for a physician's office. People are very tired of apps.

        • bronco21016 9 hours ago

          I’d far prefer a well-done app. It’s so frustrating doing a back-and-forth of dates when trying to make an appointment.

        • markus_zhang 8 hours ago

          Yeah maybe we programmers should start doing that too. Why do we use Teams, Slack or even emails?

          We should submit our code to a mainframe and everyone is going to improve their skills too.

    • qwertox 11 hours ago

      > Meanwhile huge orgs like the NHS run on fax

      I thought this was a German-only thing?

      • ta1243 10 hours ago

        Not convinced.

        In 2018:

        https://www.gov.uk/government/news/health-and-social-care-se...

        > Matt Hancock has banned the NHS from buying fax machines and has ordered a complete phase-out by April 2020.

        The NHS is quite federated. Hell, many parts of it are independent companies. Some trusts have decent modern systems though - I had to go for a test just before Christmas. I'd phoned up my GP in the morning and got an appointment for half an hour later; he ordered a test and said go to one of these 8 centres, so I went to one about half an hour away (I live a fair way from a major town). By the time I'd had the test, had lunch, and driven back home, I had another call from the GP asking me to come in that evening. The appointment was created by the GP and read seconds later at the hospital, the test was done there, and the results were reported back, again at the click of a button at the GP.

        But that's just my local trust. Go 10 miles west and it's another trust with different systems. And I had to go to one of the test centres in my trust; I couldn't go to one in a neighbouring trust, as they have different systems and there are no/limited interconnects.

      • ghaff 10 hours ago

        The healthcare system I largely use in the US was bought up by another larger system. They are now primarily on an electronic system (which for me is welcome) but some of the offices still use faxes as well.

        • dboreham 5 hours ago

          I had a stint in healthcare software and can confirm the fax thing is real. That said, I had a hunch it was all cloud-hosted fax-sending systems delivering to cloud-hosted fax-to-pdf systems. I challenged my colleagues to show me an actual fax machine and never had any takers.

    • llm_trw 10 hours ago

      Counterpoint: the PC.

      The minicomputer revolution was meant to replace all the big iron in the basement with computers shared by teams. This lasted a decade until we got computers meant for one person and we've used that format of computation for nearly 50 years now.

      Everyone is talking about how AI will replace people.

      It won't.

      AI without HI is useless. However, HI with AI is much better at handling all the bullshit office work that needs to get done—anyone who doesn't understand this is either an idiot, a troglodyte, or already has an expendable army of minions doing their work.

      The surge in AI demand won't be top-down but bottom-up. That stupid office birthday wish card being passed around? I can Shakespeare the motherfucker. And that email to the big boss telling him the project is doomed and that he needs a compass to find his own ass? Let's ask R1.

      >My boss is a moron who can't find his ass without a compass. His latest tech initiative will cost us millions and make the best staff leave. Write a professional letter that covers those points without being harsh.

      ---

      Subject: Request for Review of Upcoming [Tech Initiative]

      Dear [Recipient Name/Team Name],

      I hope this message finds you well. I wanted to share some thoughts regarding our upcoming [Tech Initiative Name], as I believe open dialogue will ensure its success while aligning with our team’s strengths and long-term goals.

      While the initiative’s intent to innovate is commendable, I’ve observed several potential challenges that may impact its effectiveness:

      Financial Impact: Based on preliminary analysis, the projected costs appear to be significantly higher than initially estimated. A detailed cost-benefit review or phased rollout might help mitigate financial risk while allowing us to test feasibility.

      Team Retention Concerns: Many of our top performers have expressed reservations about the initiative’s current structure, particularly around [specific pain point, e.g., workflow disruption, lack of clarity]. Retaining their expertise will be critical to execution, and their insights could refine the plan to better address on-the-ground needs.

      To ensure alignment with both our strategic objectives and team capacity, I respectfully suggest:

      Conducting a collaborative risk assessment with department leads.

      Piloting the initiative in a controlled environment to gather feedback.

      Hosting a forum for staff to voice concerns/solutions pre-launch.

      I’m confident that with adjustments, this project can achieve its goals while preserving morale and resources. Thank you for considering this perspective—I’m eager to support any steps toward a sustainable path forward.

      Best regards,

      • nrclark 7 hours ago

        To be honest, that kind of sounds like a dystopian hell: ChatGPT writing memos because we can't be arsed, and ChatGPT reading the same memos because neither can the recipient. Why even bother with it?

        • llm_trw 2 hours ago

          It is heaven.

          With a well-working RAG system you can find the reason why any decision was made, so long as it was documented at some point somewhere. The old SharePoint drives with a billion unstructured Word documents going back to the 1990s are now an asset.
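
          A minimal sketch of the retrieval half of such a system, assuming bag-of-words cosine similarity in place of learned embeddings so it runs with no dependencies (a real RAG stack would embed the documents and hand the top hits to an LLM to synthesize an answer); the document contents are invented for illustration:

              import math
              from collections import Counter

              # Stand-ins for the billion unstructured Word documents.
              docs = [
                  "2004 memo: we chose vendor X because vendor Y could not meet the SLA.",
                  "2011 minutes: the TPS report exists to satisfy audit requirement 7.3.",
                  "1998 review: office birthday cards are budgeted quarterly.",
              ]

              def vectorize(text: str) -> Counter:
                  return Counter(text.lower().split())

              def cosine(a: Counter, b: Counter) -> float:
                  dot = sum(a[t] * b[t] for t in a)
                  norm_a = math.sqrt(sum(v * v for v in a.values()))
                  norm_b = math.sqrt(sum(v * v for v in b.values()))
                  return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

              def retrieve(question: str, k: int = 1) -> list[str]:
                  # Rank every document against the question; the top hits
                  # become the context an LLM would answer from.
                  q = vectorize(question)
                  return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

              print(retrieve("why did we choose vendor X?"))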

  • lolinder 10 hours ago

    This is a very refreshing take.

    Our current intellectual milieu is largely incapable of nuance—everything is black or white, on or off, good or evil. Too many people believe that the AI question is as bipolar as every other topic is today: Will AI be godlike or will it be worthless? Will it be a force for absolute good or a force for absolute evil?

    It's nice to see someone in the inner circles of the hype finally acknowledge that AI, like just about everything else, will almost certainly not exist at the extremes. Neither God nor Satan, neither omnipotent nor worthless: useful but not humanity's final apotheosis.

  • lenerdenator 9 hours ago

    > human messiness will slow implementation

    If by "messiness" he means "the general public having massive problems with a technology that makes the human experience both philosophically meaningless and economically worthless", then yeah, I could absolutely see that slowing implementation.

  • mikesurowiec 3 hours ago

    I expect data centers will become more expensive precisely because everyone is building at the same time. Supply chain crunch.

    • kridsdale1 26 minutes ago

      Temporary. During their operating-and-depreciating long-tail phase, the oversupply will drive down costs for users. Like fiber cables.

  • adverbly 6 hours ago

    > * He doesn’t necessarily believe in fast takeoff AGI (just yet)

    This is so based... I would probably have given slow takeoff a 1% chance of happening 10 years ago, but today I'd put it somewhere like 30%.

  • heresie-dabord 10 hours ago

    Is this the transcript of the interview (podcast) with Dwarkesh?

    https://www.dwarkeshpatel.com/p/satya-nadella

    Because if so,

    > He doesn’t necessarily believe in fast takeoff AGI (just yet)

    the term "fast takeoff AGI" does not appear in the transcript.

    • qgin 9 hours ago

      Some paraphrasing on my part, but what he does say is at odds with fast takeoff AGI. He describes AI as a tool that gradually changes human work at a speed that humans adapt to over time.

      • heresie-dabord a minute ago

        Given that you invented the term in your paraphrase, what do you mean by "fast takeoff AGI"?

toomuchtodo 12 hours ago

Microsoft CEO Admits That AI Is Generating Basically No Value - https://futurism.com/microsoft-ceo-ai-generating-no-value

Bit of a clickbait title, but it certainly seems like the realization is setting in that the hype has exceeded near-term realistic expectations, and some are walking back claims (for whatever reason: honesty, derisking against investor securities suits, poor capital allocation, etc.).

Nadella appears to be the adult in the room, which is somewhat refreshing considering the broad over-exuberance.

  • apsec112 5 hours ago

    He doesn't actually say that; the (very biased and polemical) article writer seems to have made it up. The actual quote is:

    "Us self-claiming some [artificial general intelligence] milestone, that's just nonsensical benchmark hacking to me. So, the first thing that we all have to do is, when we say this is like the Industrial Revolution, let's have that Industrial Revolution type of growth. The real benchmark is: the world growing at 10 percent. Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry."

    That's a completely different statement from "AI is generating no value"!

  • mrtksn 11 hours ago

    IMHO anyone who started using AI seriously:

    1) Wouldn't want to go back

    2) Wouldn't believe that it's about to replace human intellectual work

    In other words, AI got advanced enough to do amazing things, but not $500B- or $1T-levels of amazing, and the people with the money are not convinced it will get there anytime soon.

    • jdlshore 33 minutes ago

      I went back. It’s fine at solving small problems and jump-starting an investigation, but only a small step better than a search engine. It’s no good for deep work. Any time I’ve used it to research something I know well, it’s got important details wrong, but in a confident way that someone without my knowledge would accept.

      RLHF trains it to fool humans into thinking it’s authoritative, not to actually be correct.

    • righthand 9 hours ago

      What is using AI seriously?

      I’ve tried to use it as a web search replacement, and often the information is generic, tells me what I already know, or is wrong.

      I’ve used a code-suggestions variant since long before the OpenAI hype started, and while it is sometimes useful, rarely is it correct or helpful in getting over the next hurdle.

      Any code from my coworkers is now just AI slop they glanced over once. Then I spend a long time reviewing and fixing their code.

      I really don’t find spending time writing long-form questions so a bot can pretend to be human all that time-saving, especially if I have to clarify or reword them into a specific “prompt-engineer”-quality sentence. I can usually find the results faster by typing in a few keywords and glancing at a list of articles. My built-in human speed reading can determine whether what I need is probably in an article.

      LLM seriousness has made my job more difficult. I would prefer if people did go back.

      • mrtksn 9 hours ago

        In my case it's coding real-world apps that people use and pay money for. I no longer personally type most of my code; instead I describe stuff or write pseudocode that LLMs end up converting into the real thing.

        It's very good at handling the BS part of coding, but it's also very good at knowing things that I don't know. I recently used it to hack a small Bluetooth printer that requires its own iOS app to print: using DeepSeek and ChatGPT, I was able to reverse-engineer the printer communication and then create an app that will print whatever I want from my macOS laptop.

        Before AI I would have had to study how Bluetooth works; now I don't have to. Instead, I use my general knowledge of protocols and communications, describe it to the machine, and ask for ideas. Then I try things and ask about the stuff that I noticed but don't understand, and once I figure out how this particular device works, I describe it to the machine and ask it to generate code that will do the thing I discovered. LLMs are amazing at filling the gaps in patchy knowledge, like my knowledge of Bluetooth. Because I don't know much about Bluetooth, I ended up creating a CRUD for Bluetooth, because that's what I needed when trying to communicate with and control my Bluetooth devices (it's also what I'm used to from Web tech). I'm a bit embarrassed about it, but I think I will release it commercially anyway.
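
        A minimal sketch of the kind of BLE probing this involves, assuming the Python "bleak" library (the actual app was native macOS; the device name, characteristic UUID, and init bytes below are hypothetical placeholders):

            import asyncio
            from bleak import BleakClient, BleakScanner

            PRINTER_NAME = "MiniPrinter"                         # hypothetical device name
            WRITE_CHAR = "0000ff02-0000-1000-8000-00805f9b34fb"  # hypothetical characteristic

            async def main():
                device = await BleakScanner.find_device_by_name(PRINTER_NAME)
                if device is None:
                    raise SystemExit("printer not found")
                async with BleakClient(device) as client:
                    # Dump every service/characteristic: read what you can and
                    # note what is writable -- the "CRUD for Bluetooth" step.
                    for service in client.services:
                        for char in service.characteristics:
                            print(service.uuid, char.uuid, char.properties)
                    # Replay bytes captured from the vendor app to test a guess
                    # about the print protocol (0x1B 0x40 is the ESC/POS reset
                    # command, assumed here purely for illustration).
                    await client.write_gatt_char(WRITE_CHAR, bytes([0x1B, 0x40]))

            asyncio.run(main())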

        If I have a good LLM at hand, I don't need specialised knowledge of frameworks or tools. A general understanding of how things work, and building up from there, is all I need.

        It's like a CNC machine but for coding.

        • righthand 8 hours ago

          I see; for single-operator, no-customer products it works nicely. You may find you use it less and less, and you will actually need that Bluetooth knowledge eventually as you grow the product.

          LLMs so far seem to be good at developing prototype apps. But most of my projects already have codegen and scaffolding tools so I guess I don’t get that use out of them.

          I predict that once you release your embarrassing app, you will find all the corner cases and domain secrets rear their heads, with little ability of the LLM to help you (especially with Bluetooth).

          • mrtksn 8 hours ago

            The Bluetooth app thing is just an example of LLMs helping me build something I don't have beyond-basics knowledge of.

            For other stuff, I still find it very useful because why would I bother to code something non-novel when I can just tell the LLM what I need?

            For example, if I need code that finds the device a given characteristic belongs to (Bluetooth stuff, again), I can just tell the LLM to write it for me. It doesn't take a genius to write such code; it's elemental stuff, and I would rather not spend my working memory on remembering the structures and names of variables. I copy-paste the current class that handles the Bluetooth comms, tell it that I need a function for sending data to the printer, and it gives me back the result. There's no art in writing such code; it's standard code against an API, and I would prefer not to bother with it.

          • eitally 4 hours ago

            You seem to be overestimating the quality of many production software products.

            • righthand 4 hours ago

              Don’t worry, I am not. I understand that most prod-deployed software was prototypes before LLMs came around.

        • kamranjon 8 hours ago

          “Before AI I would have had to study how Bluetooth works; now I don't have to.”

          And

          “It's very good at handling the BS part of coding…”

          This is the part that I think is difficult in a team situation.

          Learning and understanding is the important part, and certainly isn’t BS.

          I understand that it can make it seem like velocity has increased when you are shipping things that more or less “work”, but it’s really good practice to understand your code.

          I’ve had to spend a significant amount of time fixing work that was admittedly generated using AI by other engineers, and I really fear engineers are beginning to trade deep understanding for the high of getting something that “works” with little effort.

          It might “work” but you might be ignoring the work everyone around you is doing to clean up your brittle code that doesn’t scale and wasn’t thought through at inception.

          • saturn8601 5 hours ago

            You have an entirely valid worry, and I get a bit scared at my use of AI because of this. I fear that dev jobs might go away or become third-world-only jobs like electronics manufacturing, but in the meantime it's scary how much it atrophies your mind. At the same time, it has opened up a universe of answers to questions I wouldn't normally ask because the bar was too high. Everyone seems to have their own unique stories.

            For example, just today I dumped a crash log from the Mac version of Microsoft Remote Desktop into it. This damn app locks up 10 times a day for me, causing a "Force Quit" event and a subsequent crash dump to be generated. Normally what can I do with that crash dump other than send it off to Apple/Microsoft? It identified where it thought the crash was coming from: excessive right-clicking causing some sort of foundational error in their logic. Avoiding right-clicking has solved the issue for me. Now that I write this out, I realize I could have spent hours upon hours finding a needle in a haystack, and that would probably have made me a better developer, but the bar is too high; there is too much other work I have to get done to chase this. Instead I would have just lived with it. Now I have some closure at least.

            Again, it seems like everyone has their own unique stories. Is AI taking everything over? Not yet. Can I go back to pre-AI? No, it's like going back to Windows 95.

          • rurp 5 hours ago

            The only part I would quibble with is the fear that superficial AI-generated code will become widespread. It's not that I think this won't happen, and I wouldn't want it on my team, but I think it could actually increase demand for competent software engineers.

            I got into coding about a decade ago when cheap outsourcing had been all the rage for a number of years. A lot of my early experience was taking over very poorly written apps that had started off with fast development and then quickly slowed down as all of the sloppy shortcuts built up and eventually ground development to a halt. There's a decent chance LLMs lead to another boom in that kind of work.

          • mrtksn 8 hours ago

            It is effective because you can spend your mental energy on the things that matter, the things that make a difference.

            Code quality actually doesn't matter when you remove the human from the loop, as long as it works correctly, because it becomes something made by a machine to be interpreted by a machine.

            • kamranjon 7 hours ago

              Code isn’t a binary scale of works or doesn’t - there is inefficient code and insecure code and everything else in between that still technically “works” - but a lack of understanding will eventually cause these “working” solutions to catch up to you.

              • mrtksn 7 hours ago

                You can always revisit that part of the code if it doesn't perform. For the vast majority of code running on consumer devices, there's no difference between a smart implementation and a mediocre implementation. LLMs are great at being mediocre by default.

                As for security, that mostly stems from the architecture. LLMs' mediocrity also helps with following industry conventions and best practices.

                In my case I never have the code written all at once; instead I make LLMs write pieces that I put together myself. I never got used to Copilot or Cursor; I feel in control only with the chat interface.

          • smw 5 hours ago

            Oh come on, I'm not an "AI believer", but it regularly does things for me like write complex SQL queries that I can then verify are correct. Just something like that will often save me 20-40 minutes over doing it manually. There is _something_ there, even if it's not going to replace the workforce anytime soon.
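
            That verification step can be cheap, too. A minimal sketch, assuming a throwaway SQLite fixture whose correct answers are known by construction; the schema and the LLM-drafted query are invented for illustration:

                import sqlite3

                # Tiny fixture whose answers can be checked by hand.
                conn = sqlite3.connect(":memory:")
                conn.executescript("""
                    CREATE TABLE orders (id INTEGER, customer TEXT, total REAL, placed_at TEXT);
                    INSERT INTO orders VALUES
                        (1, 'alice', 120.0, '2024-01-03'),
                        (2, 'bob',    40.0, '2024-01-04'),
                        (3, 'alice',  60.0, '2024-02-01');
                """)

                # Hypothetical LLM-drafted query: revenue per customer per month.
                llm_query = """
                    SELECT customer,
                           strftime('%Y-%m', placed_at) AS month,
                           SUM(total) AS revenue
                    FROM orders
                    GROUP BY customer, month
                    ORDER BY customer, month;
                """

                rows = conn.execute(llm_query).fetchall()
                # Compare against answers computed by hand from the fixture.
                assert ('alice', '2024-01', 120.0) in rows
                assert ('alice', '2024-02', 60.0) in rows
                assert ('bob', '2024-01', 40.0) in rows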

      • hedora 7 hours ago

        Either you’re using it wrong, or you are using the wrong tools.

        For search, try kagi fastgpt (no subscription required):

        https://kagi.com/fastgpt

        For code completion, I’ve found it’s not good at jumping hard hurdles, but it is a bit better than find-and-replace (e.g. it finds things that are syntactically different but semantically related), and it can notice stuff like “you forgot to fix the Nth implementation of the interface you just extended”.

        It’s also good at “I need to do something simple in a language I do not know”.

        I’ve definitely encountered AI slop from coworkers. I’m sure they also produce Stack Overflow copy-paste garbage too. Dealing with their newly found increased “productivity” is an open problem.

        Insisting on strict static typing helps. The LLMs can’t help with that, and it forces a higher bar before compilation succeeds.

        • righthand 4 hours ago

          I shut off Kagi search LLM stuff because I don’t need an LLM typing stuff out while I’m literally looking at a list of results.

    • LaGrange 4 hours ago

      > 1) Wouldn't want to go back

      I went back. It sucks pretty bad, actually.

  • rsynnott 10 hours ago

    Lining up for whatever the next thing is. "Look, we know we said AR/VR was the next big thing in the late tens and LLMs were the next big thing in the early 20s, but quantum is the next big thing now. For real, this time!"

    (Not entirely sure what the next fad will be, but some sort of quantum computing thing doesn't feel unlikely. Lot of noise in that direction lately.)

    • akomtu 5 hours ago

      Quantum AI will be the next buzzword, once AGI loses steam.

    • sigmoid10 10 hours ago

      Curiously, all three of these (VR/AI/QC) are limited by hardware. But AI is the only one that has seen meaningful progress by just throwing more contemporary hardware at it. Sure, future hardware might bring advancements to all of them. But if you're making an investment plan for the next quarter, the choice is pretty obvious. This is why AI rules the venture capital sector instead of fusion or other long-term stuff.

      • zamalek 5 hours ago

        Of the three, QC is different in that it's not a solution looking for a problem. If we ever scale QC to the point where it can do meaningful work (the "if" is doing a lot of work there - per your point about hardware), then I don't see it fumbling like the other two have. We have immediate and pressing needs that we know how to solve with QC. The other two are very much research coming up with cool toys, and product fucking around so that they can find out what to use them for.

  • dfxm12 9 hours ago

    Did they tell the M365 sales/marketing teams about this? My users get bombarded with sales pitches, new free trials, and other comms about how wonderful Copilot is. It's almost a full-time job to manage people's expectations around this...

  • fny 7 hours ago

    Nadella is just saying we haven't yet seen a revolution measurable by 10% economic growth; he makes no statement about the future.

    Most people have no clue how to use AI or where to use it in their lives. There was a guy at work who was submitting command-like queries ("give meeting summary") and complained about how it left out XYZ. Then I told him to ask "Give me the meeting summary with X, Y, Z" or "What did so-and-so say about Y?"

    His mind was blown.

    We are in the first inning. We haven't figured out how to integrate this into everything yet.

  • unsupp0rted 12 hours ago

    Nadella is looking for the world to grow at 10% due to AI enhancement, like it did during the industrial revolution.

    That seems like a low bar, because it already is; it's just not equally distributed yet.

    My own productivity has grown far more than 10% thanks to AI, and I don't just mean in terms of dev. It reads my bloodwork results, speeds up my ability to repair a leak in my toilet tank, writes a concise "no I won't lend you money; I barely know you" message... you name it.

    Normally all of those things would take much longer and I'd get worse results on my own.

    If that's what I can do at the personal level, then surely 10% is an easily-achievable improvement at the enterprise level.

    • geuis 11 hours ago

      All I hear is anecdotal statements from people claiming LLMs have made them some percent more productive. Yet few actually say how or demonstrate it.

      For the last year, I've tried all sorts of models, both as hosted services and running locally with llama.cpp or ollama. I've used both the continue.dev VS Code extension and, more recently, Cursor.

      The results have been frustrating at best. The user interface of the tools is just awful. The output of the models, from DeepSeek to Qwen to Claude to whatever else, is mediocre to useless. I literally highlight some code that includes comments about what I need, and I even include long explicit descriptions etc. in the prompts, and it's just unrelated garbage out every time.

      The most useful thing has just been ChatGPT when there's something I need to learn about. Rubber-ducking, basically. It's alright for very simple coding questions or obscure database questions I might have, but beyond that it's useless. Gotta keep the context window short, or it starts going off the rails every single time.

      • alexvitkov 10 hours ago

        If LLM chatbots are making you vastly more productive in a field, you are in the bottom 20% of that field.

        They're still useful tools for exploring new disciplines, but if you're say a programmer and you think ChatGPT or DeepSeek is good at programming, that's a good sign you need to start improving.

        • agentultra 10 hours ago

          This. I shudder to think of the hubris of a programmer who doesn’t understand pointers prompting an AI model to generate low-level system code for them. Sure it might generate a program that appears to work. But is that human reading the code qualified to review it and prevent the model from generating subtle, non-obvious errors?

        • infecto 10 hours ago

          If you have to tell others that, then perhaps some introspection might be helpful. It comes across more as denial than constructive commentary.

          I do believe the benefit decreases the more senior or familiar the work is, but there is still a noticeable benefit, and I think it largely depends on the velocity and life cycle of the product: the slower the velocity or the more mature the product, the smaller the benefit. To deny it like in your post is simply being an intellectual minimalist.

          • geuis 6 hours ago

            Again, show your evidence.

            You make a polite but still ad hominem "attack" on me instead of addressing my points with demonstrations of evidence.

            Make a video or blog article actually showing how your use of LLMs in coding is making you more productive. Show what it's doing to help you that has a multiplier effect on your productivity.

            • infecto 2 hours ago

              I was not replying to you so I hope your comment was not directed at me?

          • nyeah 9 hours ago

            I see a gap between "vastly more productive" and "noticeable benefit".

            • red-iron-pine 5 hours ago

              good shoes help me walk a bit faster, and for longer.

              they don't let me walk at the pace of an SUV.

              AI is like the good shoes. they help, and make many tasks a bit easier. but they can't make me into an SUV.

              and if they can, then no programmers will have jobs. which is the end-state of this whole LLM thing as far as I can tell.

      • vipshek 9 hours ago

        I would like to propose a moratorium on these sorts of “AI coding is good” or “AI coding sucks” comments without any further context.

        This comment is like saying, “This diet didn’t work for me” without providing any details about your health circumstances. What’s your weight? Age? Level of activity?

        In this context: What language are you working in? What frameworks are you using? What’s the nature of your project? How legacy is your codebase? How big is the codebase?

        If we all outline these factors plus our experiences with these tools, then perhaps we can collectively learn about the circumstances when they work or don’t work. And then maybe we can make them better for the circumstances where they’re currently weak.

        • azemetre 7 hours ago

          I feel like diet as an analogy doesn't work. We know that the only way to lose weight is with a caloric deficit. If you can't manage that, it doesn't matter what you eat; you won't lose weight. If you're failing to lose weight on a diet, you are eating too much, full stop.

          Whereas measuring productivity and usefulness is way more opaque.

          Many simple software systems are highly productive for their companies.

      • strangescript 10 hours ago

        I think it's about scope and expectations. I have had some form of AI code completer in my neovim config for 3 years. It works flawlessly and saves me tons of keystrokes. Sure, sometimes it suggests the incorrect completion, but I just ignore it and keep coding as if it didn't exist. I am talking about line-by-line completion, not entire code blocks, though even those it does well at times.

        From what I have seen the people that have the most success have AI building something from scratch using well known tooling (read: old tooling).

        The problem is that doesn't immediately help most people. We are all stuck in crap jobs with massive, crusty codebases. It's hard for AI because it's hard for everyone.

      • vel0city 6 hours ago

        I've been using Amazon Q Developer, as it was provided and approved by my employer. It has been pretty good with Python codebases, Kubernetes configurations, and (not surprisingly) CDK/CloudFormation templates. I can pretty much just ask it "here's my python script, make everything I need to run it as a lambda, hook that lambda up to x, it should run in a vpc defined in this template over here", and it'll get all that stuff put together, and it's normally pretty solid code it generates. It seems to pull in a lot of the context of the project I have open. For instance, I can say "it should get those values from the outputs in other-cf-template.yml", and it knows the naming schemes and whatnot across templates, even for templates it didn't generate.
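
        A minimal sketch of the sort of scaffolding that workflow produces, assuming CDK v2 in Python; the construct names, handler, and asset path are hypothetical placeholders rather than anything Q Developer actually emitted:

            from aws_cdk import App, Duration, Stack
            from aws_cdk import aws_ec2 as ec2
            from aws_cdk import aws_lambda as _lambda
            from constructs import Construct

            class ScriptAsLambdaStack(Stack):
                def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
                    super().__init__(scope, construct_id, **kwargs)

                    # The script must run inside a VPC; a real stack might
                    # import one from another template's outputs instead.
                    vpc = ec2.Vpc(self, "ScriptVpc", max_azs=2)

                    _lambda.Function(
                        self, "ScriptFn",
                        runtime=_lambda.Runtime.PYTHON_3_12,
                        handler="my_script.handler",          # hypothetical module.function
                        code=_lambda.Code.from_asset("src"),  # directory holding the script
                        vpc=vpc,
                        timeout=Duration.seconds(60),
                    )

            app = App()
            ScriptAsLambdaStack(app, "ScriptAsLambdaStack")
            app.synth()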

        I might go back and tweak some stuff, add some extra tags and whatnot, but often it's pretty good at doing what I ask.

        Sometimes its suggestions aren't what I was really wanting to do in my codebase, and a handful of times it has made up methods or parameters of even well-known libraries. But usually its suggestions are better than basic IntelliSense-style autocomplete, at least in my experience.

        I haven't used many of the other developer assistant plugins like say GitHub Copilot. I couldn't really say which is better or worse. But I do think using Q Developer has made me faster in many tasks.

        I wouldn't expect a tool that doesn't have access to the context of my editor and the files I have open to be very useful for actually coding. There's a lot of context to understand in even a basic application. If you're just asking a locally running app in ollama "give me a method to do x", don't be surprised if it doesn't know everything else happening in your app. Maybe it'll give you a halfway decent example of doing something, but divorced from how it actually plugs into whatever you're making, it might be entirely worthless.

      • infecto 11 hours ago

      Just in the past couple of months there have been a number of "I am a senior/principal engineer and this is how I use LLMs" posts. I would agree that the tools are not optimal yet, but every iteration has improved for me.

      Maybe whatever language you are coding in or whatever project you are working on is not a good fit? It is an equally perplexing situation for me when I hear anecdotes like yours which don't align with my experience. The fact that you say everything is garbage calls into question either how you are using the tool or something else.

      I can reliably use Cursor's composer to reference a couple of files, give a bullet list of what we are trying to do, and point it to one of the better models, and the output is junior-engineer level or better. When I say junior, I mean a junior who has experience with the codebase.

        • andy24 11 hours ago

          What kinds of projects are you working on? My experience has not been very good with these tools (expanded in a sibling comment).

          • infecto 10 hours ago

            Generally a lot of web dev, which is where I would assume LLMs shine the most. I noted elsewhere that I think it depends a lot on the age of the product and the level of velocity. For early-life products where the speed of your velocity matters, I think you can get the most benefit. The more mature the product and the slower the team implements features, the benefits are still measurable but not as high.

            • andy24 9 hours ago

              Ah yeah, I can totally see how it can be useful for churning out tons of code. Even without copy-paste, just generating a ton of references and rewriting/improving them. Anecdotally, I’ve tried asking DeepSeek to review a few files of my code; it wasn’t bad at all, though not without false positives.

      • F7F7F7 6 hours ago

        Seriously. It’s like half of the people in this thread are living in a completely different world.

      And this is coming from someone who uses LLMs daily at the subscription, API (VS Code and 3 Next.js apps), and local level. I have a custom LangChain stack, a prompt repo, you name it. And regardless of how little or how much I use what I have, or what soup-du-jour prompt or process I follow (from "keep it simple" to prompt enhancers), I can't say it's made a meaningful difference in my life. Even with all of the customization and planning.

        It’s a great search engine though.

        • Earw0rm 2 hours ago

          Would it look like such a good search engine if the actual search engines hadn't progressively broken themselves over the last 15 years?

          I swear half the time when I use it to look up the nuances of system API stuff, it's replaying forum, mailing list, or Stack Overflow conversations that Google ought to be able to find but somehow can't.

      • ta1243 10 hours ago

        > All I hear is anecdotal statements from people claiming LLMs have made them some percent more productive. Yet few actually say how or demonstrate it.

        It's very difficult to measure the productivity of most people, certainly most people in office jobs, so while you can have a gut feeling that you're doing better, it's no more measurable than pre-AI individual productivity measurement was.

        • skydhash 10 hours ago

          It’s not really about objective measurements, but practical applications. Like: try this in the following manner and compare it to your previous workflow. Sensible advice like that found in The Pragmatic Programmer.

          • ta1243 10 hours ago

            Sure, so it's always going to be anecdotal. That doesn't mean the benefits don't exist, just that they can't be objectively measured. Just like we can't objectively measure the output of a single knowledge worker, especially output on a single day.

      • andy24 11 hours ago

      I have a similar experience. I tried to use it for real work and got frustrated by the chat’s inability to say “I don’t know”. It’s okay for code snippets demonstrating how something can be used (Stack Overflow, essentially), and code reviews can be helpful if you're doing something for the first time. But they fail to answer the questions I’m interested in, like “what’s the purpose of X”.

      • xienze 9 hours ago

        I agree with the other commenter that said if you're "vastly" more productive as a developer due to AI, you probably weren't that good to begin with. Otherwise, please provide concrete examples.

        Myself, I do find it quite useful in a few respects. First and foremost, as a "better Google/StackOverflow." If something's not working, I can describe my exact scenario and usually get pointed in the right direction. Sometimes the LLM just wastes my time by very confidently telling me some function/library that solves my exact problem exists when in fact it doesn't.

        Second, IntelliJ's local LLM is sort of a smarter autocomplete. It makes some outright wrong suggestions, but where I have to do a lot of repetitive tasks that follow a simple pattern (like, for instance, mapping fields from one type of object to another), it does a pretty good job of making correct suggestions. I definitely appreciate it, but it's certainly not doing things like writing a significant portion of code in my style.

      • llm_trw 10 hours ago

        I fixed the hinge in my oven by giving perplexity.com the make and problem. I saved an hour on the phone calling people to organise a visit some time in the next week.

        Maybe you should stop using the AI slop tools that don't work?

        • sharpshadow 7 hours ago

          Very likely you would have been just as successful fixing your oven hinge yourself without AI; there is tons of content about that online.

          • llm_trw 2 hours ago

            No. I'd already spent 30 minutes looking at how to solve it myself. The search on Perplexity was a hail mary before I started calling handymen.

    • thagsF 11 hours ago

      And Henry Ford would reply: "Who is going to buy the cars?"

      We have been living in a fake economy for quite some time where money is printed and distributed to the "tech" sector. Which isn't really "tech", but mostly entertainment (YouTube, Netflix, Facebook, ...).

      Growth of the economy means nothing. The money that has been printed goes to shareholders. What the common man gets is inflation and job losses.

      If you want to grow the real economy, build houses and reduce the cost of living.

      • sharpshadow 7 hours ago

        A friend tried to tell me China has a real estate crisis because the value of houses is dropping due to building too many, and people are losing on their investments. I asked him if he is sure cheap and available housing is a crisis.

        • treis 5 hours ago

          Everyone in the industry losing their shirts and going out of business is a crisis. It happened 15 years ago in the US, and we still haven't made it back to mid-90s levels of housing starts.

      • coliveira 9 hours ago

        > If you want to grow the real economy, build houses and reduce the cost of living.

        Yes, I wonder why it is so hard for Western countries to understand that there's no future in a place where housing is more expensive than your average salary. It may look cool for a few years, until most people have left or are living on the streets.

        • tonyhart7 9 hours ago

          "there's no future in a place where housing is more expensive than your average salary."

          Don't get me wrong, everyone wants cheaper housing, just not for their own house.

          • sudosysgen 8 hours ago

            I for one would love it. If I ever have to sell housing, then I also have to buy housing, so high prices are no benefit to me unless I reduce my quality of life.

        • bufferoverflow 7 hours ago

          Plenty of housing. The problem is, people want cheap housing in places where everyone wants to live. I don't think that will happen any time soon.

          • coliveira 6 hours ago

            This is nonsense that spreads because of the North American style of housing. If you're talking about sprawling suburban houses, then you're right. But big cities have provided reasonable housing for lots of workers for centuries. The only thing you need is to build more apartments in the cities that have an excess of job positions.

            • bufferoverflow an hour ago

              No, you can't just "build more apartments". For these new inhabitants you will need more grocery stores, more bus/subway stops and overall transportation, more hospitals, more firefighters, more restaurants, more gyms, more tennis courts, more of everything.

              • coliveira 41 minutes ago

                Of course. Big cities with all this infrastructure are nothing new. They existed in the past and are alive and well in Asia and other parts of the world. Only in North America do we have this bizarre world where it seems like a strange thing to build cities and provide infrastructure for workers!

          • hollandheese 4 hours ago

            There is no cheap housing anywhere in the entire state of California. Even in the worst and poorest parts of the state, where there are basically no jobs or anything, the housing is still way more expensive than anyone can afford.

            • bufferoverflow 2 hours ago

              So move out of there. Plenty of cheap housing in the country.

    • toomuchtodo 12 hours ago

      You should be curious why Nadella is looking for the world to grow at that rate. It's because he wants Microsoft to grow to $500B/year in revenue by 2030, and without that economic growth it will be challenging to reach that target. You can grow into a TAM, try to grow or broaden the TAM, or some combination of both. Without AI, it is unlikely the growth target can be met.

      https://www.cnbc.com/2023/06/26/microsoft-ceo-nadella-said-r...

    • toberoni 9 hours ago

      Annual growth rates during the Industrial Revolution were way lower than 10%. In the 18th century growth was well below 1%; during the 19th century it averaged 1-1.5% (the highest estimates go up to 3% annual growth for certain decades close to 1900).[0][1][2]

      Some regions or sectors might have experienced higher growth spurts, but the main point stands: the overall economic growth was quite low by modern standards - even though I don't think GDP numbers alone adequately describe the huge societal changes of such sustained growth compared to agrarian cultures before the Industrial Revolution.

      [0] https://web.archive.org/web/20071127032512/http://minneapoli... [1] https://www.bankofengland.co.uk/explainers/how-has-growth-ch... [2] https://academic.oup.com/ereh/article/21/2/141/3044162

    • jononor 10 hours ago

      What is your personal productivity metric by which you have seen more than a 10% increase? More money earned, less money spent, fewer working hours for the same income, more leisure time? It needs to be something in aggregate to relate to what Nadella meant. There are many individual tasks which an LLM system can help with. But there are also many ways for those gains to fail to aggregate into large overall gains, at the personal level, the corporate level, and the economy-wide level.

    • threeseed 11 hours ago

      Going to safely assume you've never worked at an enterprise.

      Because improving the productivity of every employee by 10% does not translate to the company being 10% more productive.

      Processes and systems exist precisely to slow employees down so that they comply with regulations, best practices etc rather than move fast and break things.

      And from experience with a few enterprise LLM projects now they are a waste of time. Because the money/effort to fix up the decades of bad source data far exceeds the ROI.

      You will definitely see them used in chat bots and replacing customer service people though.

    • suraci 11 hours ago

      I think the 'grow at 10%' refers to the incremental part of the entire world/market.

      During the industrial revolutions (steam/electricity/internet), the world was growing: there were trains, cars, Netflix.

      Business grew as productivity grew; even so, we lived through 2 world wars and dozens of economic crises.

      But now is very different: when you repair the tank with an LLM's help, and the labour value of repairers is decreased, there's no additional value being produced.

      There's a very simple thought experiment about the result of productivity growing alone:

      let's assume robotics advances to an extremely high level, and everything humans work on can be reduced to 1/100 with the help of robots. What will happen next?

      • tacomadome 10 hours ago

        You’re describing exactly what happened during both the Industrial Revolution and the advent of computer automation.

        Prior to computerization and databases, millions of people were required for filing, typing, and physically transporting messages and information. All of those jobs, entire fields of work were deleted by computerization.

        • Earw0rm 2 hours ago

          Even "computer" was originally a job title. For a person.

      • ta1243 10 hours ago

        > let's assume robotics advances to an extremely high level, and everything humans work on can be reduced to 1/100 with the help of robots. What will happen next?

        We work 35 hour years instead of 35 hour weeks?

        • bcrosby95 10 hours ago

          Lol, ever the optimist I see.

          • ta1243 5 minutes ago

            It's always worth reminding people that wealth accumulation by the insanely rich isn't the only option

    • mvandermeulen 5 hours ago

      How close do we need to be for you to help a brother out? Feeling seriously unsupp0rted right now

    • bbarnett 11 hours ago

      It also gets all of these things wrong, like not paying attention to models of toilets and quirks for their repair, often speaking with an authoritative voice and deceiving you about the validity of its instructions.

      All of the things you cite are available via search engines, or better handled with expertise, so you know how much of the response is nonsense.

      Every time I use AI, it's a time waste.

      • unsupp0rted 11 hours ago

        Every time I contact an enterprise for support, the person I'm talking to gets lots of things wrong too. It takes skepticism on my part and some back and forth to clean up the mess.

        On balance AI gets more things wrong than the best humans and fewer things wrong than average humans.

        • xienze 8 hours ago

          The difference is that a human will tell you things like "I think", "I'm pretty sure" or "I don't know" in order to manage expectations. The LLM will very matter-of-factly tell you something that's not right at all, and if you correct it the LLM will go and very confidently rattle off another answer based on what you just said, whether you were telling it the truth or not. If a human acted that way more than a few times we'd stop asking them questions, or at least do a lot of "trust but verify." LLMs do this over and over again and we just kind of shrug our shoulders and go "well, they do pretty good overall."

          • smw 5 hours ago

            I can't count the number of times I've had a support person confidently tell me something that is obviously not applicable to my problem and makes completely flawed assumptions about cs/physics/networking/logic.

            I get a lot of correct answers from LLMs, but sometimes they make shit up. Most of the time, it's some function in a library that doesn't actually exist. Sometimes even the wrong answers are useful, because they tell me where to look in the reference docs. Asking it to search the web and cite sources makes it easier to verify the answer.

            I don't appreciate what's going on with AI art and AI generated slop, but the idea that they aren't a useful tool is just wild to me.

            • xienze 2 hours ago

              I'm not saying it's not useful, I'm saying that we hold humans giving us answers to a much higher standard than LLMs.

        • Eggpants 10 hours ago

          AI is a lossy data compression technique at best. One can always tell when an AI cheerleader/ex blockchain bro has hitched their financial wagon to this statistic based word vomit grift.

          • unsupp0rted 10 hours ago

            Please elaborate, preferably without breaking HN guidelines about dismissive name-calling

    • maigret 10 hours ago

      Who suddenly knows how to measure developer productivity? I thought this was impossible.

    • deadbabe 9 hours ago

      Unless you’re producing 10% more money with AI you’re not doing shit.

  • bognition 12 hours ago

    Time to sort NVDA?

    • toomuchtodo 12 hours ago

      A fellow degenerate gambler I see. The market can remain irrational longer than you can remain solvent, trade with caution. Being early is the same as being wrong.

      • bognition 10 hours ago

        A common hypothesis for why Nvidia is so hot is that they have an effective monopoly on the hardware to train AI models, and it requires a crap ton of hardware.

        With DeepSeek it's been demonstrated you can get pretty damn far for a lot cheaper. I can only imagine that there are tons of investors thinking that it's better to invest their dollars in undercutting the costs of new models vs investing billions in hardware.

        The question is: can Nvidia maintain their grip on the market in the face of these pressures? If you think they can't, then a short position doesn't seem like that big of a gamble.

        • rangestransform 7 hours ago

          it's effectively a software moat w.r.t. GPU programming; there's nothing stopping AMD from catching up besides insufficiently deep pockets and engineering culture

          • hedora 7 hours ago

            Not sure why AMD's software side gets so much flak these days. For everything other than AI programming, their drivers range from fine to best in class.

            I have an AMD minipc running linux that I use for steam gaming, light development, etc. The kernel taint bit is off.

            There is one intel device on the pci/usb buses: wifi/bt, and it’s the only thing with a flaky driver in the system. People have been complaining about my exact issue for something like 10 years, across multiple product generations.

            • rangestransform 4 hours ago

              Nobody who controls the purse strings cares about the kernel taint bit if their model doesn’t train, if they’re burning developer time debugging drivers, if they have to burn even more dev time switching off of cuda, etc.

              If AMD really cared about making money, they would’ve sent MI300s to all of the top CS research institutions for free and supported rocm on every single product. Investing any less than nvidia, the trillion dollar behemoth, is just letting big green expand their moat even more.

              • hedora 10 minutes ago

                As I said, other than AI. The management made a big bet on crypto when nvidia made a big bet on AI.

                That didn’t work out all that well in the medium term (it did in the short term), though it gave them runway to take a big chunk of intel’s server market.

                Whether or not that was a good move, it’s not evidence of some engineering shortcoming.

        • mixmastamyk 4 hours ago

          A short position is always a gamble, because you could lose more than everything.
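
          A toy illustration of that asymmetry, with hypothetical prices:

              # Sketch: long vs. short downside, hypothetical numbers.
              entry = 100.0

              # Long: worst case the stock goes to zero, losing 100% of the entry.
              long_worst_loss = entry - 0.0

              # Short: losses grow without bound as the price rises.
              for price in (150.0, 300.0, 1000.0):
                  loss = price - entry  # 50, 200, 900 per share shorted
                  print(f"price {price:.0f} -> loss {loss:.0f} ({loss / entry:.0%} of entry)")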

      • short_sells_poo 11 hours ago

        Highly regarded people unite :D

        More seriously though: unless you have privileged information or have done truly extensive research, do not short stocks. And if you do have privileged information, still don't short stocks, because unless you have enough money to defend yourself against insider trading charges, like Musk and his ilk, it's not going to be worth it.

        It's perfectly reasonable to determine that a particular high growth stock is not going to perform as well going forward, in which case I'd shift allocation to other, better candidates.

        Generally, being long equities is a long term positive expected value trade. You don't have to time the market, just be persistent. On the other hand, as you correctly alluded to, shorting equities requires decently precise timing, both on entry and exit.

    • dijit 11 hours ago

      I think it's probably foolish to short Nvidia until there are at least echoes of competition.

      AMD wants it to be them, but the reality is that the moat is wide.

      The closest for AI is Apple, but even then, I'm not certain it's a serious competitor; especially not in the datacenter.

      For gaming there's practically no worthwhile competition. Unreal Engine barely even fixes render bugs for Intel and AMD cards, and I know this for a fact.

      FD: I'm recently holding shares in Nvidia due to the recent fluctuation, and my own belief that the moat is wider than we care to believe, as mentioned.

    • dpflan 12 hours ago

      Using “bubble” sort? ;)

      • jeyoor 11 hours ago

        The combination of high and climbing price-to-earnings ratios for a small subset of tech firms, outsize retail investment in tech (cloaked by people buying crypto), and macro-environment factors like high interest rates stimulating risky lending has me moving this bubble toward the top of the list.

        See further: https://www.morningstar.com/news/marketwatch/20250123167/six...

      • HPsquared 11 hours ago

        Bubble sort is very resource-hungry...

    • suraci 11 hours ago

      It's very dangerous; shorting in a market where a gamma squeeze can occur is extremely dangerous.

      Other markets, like Taiwan's, are preferable.

  • bsenftner 12 hours ago

    The "elephant in the room" is that AI is good enough, it's revolutionary in fact, but the issue now is the user needs more education to actually realize AI's value. No amount of uber-duper AI can help an immature user population lacking in critical thinking, which in their short shortsightedness seek self destructive pastimes.

    • geuis 11 hours ago

      It's not "good enough", it's mostly overhyped marketing garbage. LLM models are mostly as good as they're going to get. It's a limitation of the technology. It's impressive at what has been done, but that's it.

      It doesn't take billions of dollars and all human knowledge to make a single human level intelligence. Just some hormones and timing. So LLMs are mostly a dead end. AGI is going to come from differenst machine learning paradigms.

      This is all mostly hype by and for investors right now.

      • infecto 11 hours ago

        It's pretty good for a whole class of problems that humans currently do.

      • literalAardvark 11 hours ago

        LLM direct response models are quite mature, yes (4o)

        LLM-based MoE architectures with some kind of reasoning process (Claude 3+, o series, R1, Grok 3 with thinking) are the equivalent of v0.2 atm, and they're showing a lot of promise.

    • petee 10 hours ago

      I spent more time yesterday trying to get "AI" to output runnable code, and retyping, than if I had just buckled down and done it myself.

      But I don't think you can blame users if they're given an immature tool, when it really is on companies to give us a product that is obvious to use correctly.

      It's not an exact analogy, but I always like to think of how doors are designed - if you have to put a sign on it, it's a bad design. A well-designed door requires zero thought, and as such, if "AI" usage is not obvious to 99% of the population, it's probably a bad design.

      • bsenftner 5 hours ago

        Think of it like you're talking to someone so smart that they answer before you're finished explaining, and get the general idea wrong, or seem really pedantic, and your misplaced use of a past-tense verb that should have been present tense causes them to completely reinterpret what you're talking about. Think of our current LLMs as idiot savants, and trust them as much.

        I don't use AI to write code if that code is not short and self-contained. It's great at explaining code, great at strategy and design about code. Not so much at actually implementing code larger than 1/4 to 1/3rd of its output context window. After all, it's not "writing code"; it's statistically generating tokens that look like code it's seen before. It's unknown whether the training code from which the LLM is statistically generating a reply actually ran; it could have been pseudocode explaining a computer science concept. We don't know.

        People seem to want a genie that does what they are thinking, and that is never going to work (at least with this technology). I'm really talking about effective communication, and understanding how to communicate with a literal, unreal, non-human construct: a personality-theater-enhanced literary embodiment of knowledge. It's subtle, and it requires more effort on the user's side than it would if one were talking to a human expert in one's area of knowledge. You have to explain the situation so the AI can understand what you need, and developers are surprisingly bad at that. People in general are even worse at explaining. Implied knowledge is rampant in developer conversation, and an LLM struggles with ambiguity, such as implied references; too many identical acronyms exist in different parts of tech and science. It does work, but one really needs to treat LLMs like idiot savants.

    • rsynnott 10 hours ago

      "You're holding it wrong" only goes so far.

orzig 12 hours ago

Remember that there is a lot of nuance to these sorts of deals.

I don’t have any domain knowledge, but I recently saw an executive put in restaurant reservations for five different places the night of our team offsite, so he would have optionality. An article could accurately claim that he later canceled 80% of the team’s eating capacity!

  • ZeroGravitas 12 hours ago

    But if the press had reported that your team was going to eat 5 meals at the same time, before it was revealed that it was just an asshole screwing over small businesses, then that correction in eating capacity should be reported.

    • FrustratedMonky 11 hours ago

      "should "

      But often not.

      That was the point in the parent. How this is being reported is a bit skewed.

      And also there is the problem that nobody reads corrections. Lies run around the globe before the Truth has tied its shoelaces, or some quote like that.

PeterStuer 6 hours ago

Microsoft said they were going to spend $80 billion on AI data centers, and they confirmed this again, so there is no 'scaling back'.

Speculation:

I suspect that after they observed the incredible speed with which xAI built out leading-edge AI infrastructure for Grok in Memphis, far faster than what the traditional data centers could offer, Microsoft had an epiphany and went 'wait, why are we not doing this?'

  • mistrial9 6 hours ago

    somehow, Memphis, an ancient Egyptian royal center and core of a grand imperial slave state, is such a fitting name for these developments

    • kragen 6 hours ago

      Ancient Egypt was neither imperial nor a slave state; though it did have slavery, most Egyptians were not slaves.

      • mistrial9 6 hours ago

        • kragen 3 hours ago

          These pages are somewhat out of date, but they confirm what I said: "Ancient Egypt was a peasant-based economy and it was not until the Greco-Roman period that slavery had a greater impact."

          • mistrial9 2 hours ago

            no - by not specifying which Kingdom you were referring to, you gave an incomplete, obstinate, reactive answer (as I have also done many times). So my remedy today was to read a bit before further damage is done. Non-English pages might also be useful on this massive historical topic.

laserbeam 13 hours ago

I've read the first 2 paragraphs 5 times and I still can't tell if Microsoft was renting datacenters and paying for them, or if Microsoft was leasing out datacenters and decided "no more AI data centers for you, 3rd parties".

And digging further into the article didn't help either.

  • walrus01 13 hours ago

    The first one: they were acquiring datacenter space.

iainctduncan 6 hours ago

The thing that is driving me crazy in all these threads is that we invariably have a bunch of programmers saying "I use it for coding and I would never go back", but this is orthogonal to the question of whether it's a good business.

If you use gen-AI but don't pay the real cost, you're not really a data point on the economics of it. It could be useful to programmers and still be a terrible business if the cost of giving you that usefulness is more than you (or your employer) would pay.

  • Valord 5 hours ago

    My impression is that costs will continue to go down. Large investments are unlikely to be profitable for these businesses. Whoever is dumping billions into this is unlikely to get their money back. The new tooling, models, discoveries seem to be commoditized within months. There are no moats. If things keep going this way there will never be a point where employers (or anybody for that matter) have to pay the real cost.

helsinkiandrew 12 hours ago

Meanwhile: "Apple Says It Will Add 20k Jobs, Spend $500B, Produce AI Servers in US" https://www.bloomberg.com/news/articles/2025-02-24/apple-say...

https://news.ycombinator.com/item?id=43158168

  • numbsafari 12 hours ago

    This has nothing to do with supply/demand, and everything to do with geopolitics.

    • FrustratedMonky 11 hours ago

      You think they would spend 500B without thinking there will be any demand?

      • rsynnott 10 hours ago

        You're missing the key phrase. "Says it will". Companies, of course, say all sorts of things. Sometimes, those things come to pass. But not really all that often.

      • sangnoir 4 hours ago

        It cost Apple $0 to say they'll spend $500B.

      • dboreham 5 hours ago

        The 500B is just something to put on a press release.

  • turnsout 12 hours ago

    If Apple can pull off "Siri with context," it will completely annihilate Microsoft's first mover advantage. They'll be left with a large investment in a zero-margin commodity (OpenAI).

    • asadhaider 12 hours ago

      Unfortunately Siri remains near useless at times even with Apple Intelligence™®

      • helsinkiandrew 12 hours ago

        The "LLM Siri" hasn't been rolled out even in beta, estimates reckon 2026

        https://www.macrumors.com/2024/11/21/apple-llm-siri-2026

        • joshstrange 12 hours ago

          Correct, Apple changing the UI before changing the backend might be one of the more stupid things I’ve ever seen.

          • drowntoge 7 hours ago

            The messiest launch ever. The renewed UI makes it easy to assume that the LLM-backed Siri is already here but just isn't much better than the old one. A marketing disaster.

        • turnsout 11 hours ago

          Yes, although before full "LLM Siri," Apple promised an "enhanced" Siri with contextual understanding in iOS 18. The clock is ticking though—WWDC will be here before you know it.

      • ghaff 9 hours ago

        Just like all the other voice assistants.

    • rs186 11 hours ago

      If history is our guide, that's never going to happen.

    • CuriousSkeptic 11 hours ago

      Apple will not beat Microsoft in any capacity here

      Microsoft has all the context in the world just waiting for exploitation: Microsoft Graph data, Teams transcripts and recordings, Office data, Exchange data, Recall data(?), and, while not context per se, even the Xbox gaming data

      • helsinkiandrew 10 hours ago

        > Apple will not beat Microsoft in any capacity here

        I'm sure MS will provide AI to business, but if Apple get things right, they'll be the biggest provider of AI to the masses.

        With a Siri that knows your email, calendar, location history, and search history, with the ability to get data from and do things in 3rd-party apps (with App Intents), and if it runs on your phone for security, it could be used by billions of consumers, not a few hundred million MS Office users.

        What was that restaurant I went to with Joan last fall? Send LinkedIn requests to all the people I've had emails from at company X.

        Of course they could take too long or screw things up.

        • tallanvor 9 hours ago

          I wouldn't be sure about that.

          Siri's success would greatly depend on app developers adopting intents. The major players are going to be hesitant to give Apple that much access to data - the EU may help push them that way, but even still, Microsoft, Google, Facebook, and others want their AIs to be the one people use.

          Siri is also limited to Apple products, and while lots of people have iPhones, many of them still have a PC, where Siri doesn't work.

          Companies are also very concerned about employees accidentally or purposefully exfiltrating data via AI usage. Microsoft is working hard to build in guardrails there and Intune allows companies to block Siri intents, so Apple would have to do a lot to reassure corporate customers how they'll prevent Siri from sending data to a search engine or such.

          But you might be right. I think it's way too early to tell, and that's why so much money is being poured into this. All the major players feel that they can't afford to wait on this.

          • turnsout 8 hours ago

            A lot of developers have already adopted intents to support Shortcuts and existing Siri. There will be tremendous business pressure to be able to fit into a request like "Get me a car to my next appointment"

      • svnt 11 hours ago

        I’m sorry, but what are you saying?

        How are any of these unique competitive advantages over iCloud, the App Store, Safari, and just generally having more locked-in, high-margin mobile platform users than anyone else?

        • michaelt 11 hours ago

          If the money is in providing AI to businesses, to do things humans were previously paid to do - then Microsoft would be in a much better position than Apple, because they already have a big foothold whereas Apple has never really targeted business use.

crowcroft 6 hours ago

My theory:

- OpenAI and Oracle partnership: When Microsoft is 'at capacity' more demand can go to Oracle, so now Microsoft don't need to rapidly add capacity with leases (which likely have a lower ROI than Microsoft owned and operated centres).

- Longer term investments are still going ahead: Microsoft aren't cutting back on capex investment. They don't want to lease if they don't have to, but long term still see a huge market for compute that they will want to be a key supplier of.

I think Microsoft's goal here is to focus on expanding their capex to be a long-term winner instead of chasing AI demand at any cost in the short term. Likely because they think they're already in a pretty strong position.

light_triad 6 hours ago

Makes sense given what Nadella recently said:

- Self-claiming some AGI milestone is "nonsensical benchmark hacking"

- The AI model race won't be winner-take-all

- The real benchmark is if the world is growing at 10%

MSFT is putting its focus on AI adoption in the app layer and "at-scale consumer properties"

Full conversation: https://www.youtube.com/watch?v=4GLSzuYXh6w

zkmon 12 hours ago

The hype starts to head down towards reality.

  • smetj 12 hours ago

    No it's one level deeper. The exclusive claim to the "hype" is heading down towards reality.

Aurornis 7 hours ago

By itself this doesn’t mean much. If they’re reallocating funds to different data centers or building their own then this could be a wash.

ffjffsfr 13 hours ago

key piece of info at the end; looks like they are leaving the spending on datacenters to OpenAI

> Microsoft’s alliance with OpenAI may also be evolving in ways that mean the software giant won’t need the same kind of investments. In January, OpenAI and SoftBank Group Corp. announced a joint venture to spend at least $100 billion and perhaps $500 billion on data centers and other AI infrastructure.

jimmySixDOF 12 hours ago

On the Dwarkesh podcast this week, Nadella was just commenting on how they expected to benefit from reduced DC rental pricing and were preparing for Jevons paradox to max out capacity. I guess they are calculating a ceiling now.

feverzsj 10 hours ago

Seems consumers just hate every product with AI functionality.

  • lompad 9 hours ago

    More like consumers hate every product where AI gets shoved in their face without any thought and --

    Okay, maybe you're right.

  • boshalfoshal 6 hours ago

    The consumer market doesn't matter much here, imo.

    I think most companies know b2b is the most lucrative segment for AI because it reduces one of their top costs - people. Companies selling AI are basically just selling the ultimate automation tool, which (in theory) is massive value for companies. Having a nice consumer product is a side gig.

smetj 12 hours ago

> Wall Street stepped up its questions about the massive outlays after the Chinese upstart DeepSeek released a new open-source AI model that it claims rivals the abilities of US technology at a fraction of the cost.

And that's the crux of it and that's also why DeepSeek was such a big deal.

everfrustrated 11 hours ago

This could simply be datacenter deals started years ago that they are pulling out of now that they have larger AI-optimized DCs being commissioned in places better suited to faster and larger power availability.

strangescript 10 hours ago

OpenAI is pivoting away from MS. MS also has their own internal AI interests. They need to frame this for investors in a way that doesn't look like they're losing out. "Nadella doesn't believe in AI anymore". Done and done.

blonderoast 10 hours ago

a lot of this sounds like a normal course of business and stuff that msft does all the time. i don't understand the openai drama speculation on here. msft continues to have right of first refusal on openai training and exclusivity on inferencing. if someone else wants to build up openai capacity to spend money on msft for inferencing, msft would be thrilled. they recognize revenue on inferencing, not training, at the moment, so it's all upside to their revenue numbers

DisjointedHunt 7 hours ago

Old news and incorrect the way it is phrased : https://x.com/dylan522p/status/1894050388145508586

I'm not sure why there is so much poor reporting on accelerator demand recently, it seems there are a lot of people looking to sell a message that isn't grounded in reality.

  • 01100011 6 hours ago

    Quite a lot of money is at stake in the contest for retail investors' minds.

    Lots of stupid takes get amplified by people who lack background. Look at the recent Intel/TSMC/Broadcom merger rumors. The story was "Canada could join the US" level of stupid to anyone with experience anywhere near chip fabrication but it still circulated for several days. Also, look at what it did to the stock price of INTC. Lots of money made and lost there.

jr-ai-interview 10 hours ago

It's sad; people are already recklessly rearranging business logic via AI in key medical software systems, due to global outsourcing and linguistic reasons.

I have so much intense hatred for pg and Sama right now I rarely come to this shit show of a site

bob1029 13 hours ago

> a couple of hundred megawatts.

So, something approximating an entire region or 10~100k H100s.
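
A rough check on that estimate (a sketch: 700W is the published H100 SXM TDP, but the ~2x all-in overhead factor for cooling, networking, and host systems is my assumption):

    # Back-of-envelope: H100s supportable by ~200 MW of datacenter capacity.
    capacity_mw = 200
    watts_per_gpu = 700   # published H100 SXM TDP
    overhead = 2.0        # assumed all-in factor: cooling, networking, hosts

    gpus = capacity_mw * 1e6 / (watts_per_gpu * overhead)
    print(f"~{gpus:,.0f} H100s")  # ~142,857, the same order as the 10~100k guess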

drexlspivey 13 hours ago

I guess Satya is not good for his $80B (for project Stargate) any more.

  • fny 12 hours ago

    He is: this refers to spending Microsoft had planned on its own outside of Stargate. It says so in the article.

    • skywhopper 11 hours ago

      In other words “Stargate” is itself a lie: no new spending, just a political repackaging of existing plans.

      • _heimdall 11 hours ago

        That's nothing new, politicians lie all the time.

        Everyone's response when a politician announces some supposed massive spending program should be to say "show me the appropriation bill and receipts, then we can talk."

        • shigawire 10 hours ago

          Appropriation bills don't go as far as they used to, either...

      • fny 8 hours ago

        Microsoft has committed $100B to Stargate. These cuts are meaningless by comparison.

m3kw9 7 hours ago

It’s additive math: is the overall sum plus or minus? There is always gonna be some push and pull.

“TD Cowen posited in a second report on Monday that OpenAI is shifting workloads from Microsoft to Oracle Corp. as part of a relatively new partnership. The tech giant is also among the largest owners and operators of data centers in its own right and is spending billions of dollars on its own capacity. TD Cowen separately suggested that Microsoft may be reallocating some of that in-house investment to the US from abroad”

decimalenough 13 hours ago

Not surprised. You need massive amounts of compute for training LLMs from scratch, but practical uses of LLMs are increasingly focused on using existing models with minimal or no tweaks, which requires far less computing power.

  • belter 13 hours ago

    I would like to note Bloomberg pulls these types of FUD before every NVDA earnings release. The last one they did was the false reports on Blackwell technical issues.

    They have been doing it for years

    "Bloomberg fined €5m over report of fake news release" - https://www.ft.com/content/32013b6a-202f-11ea-b8a1-584213ee7...

    "Supermicro Statement on Bloomberg’s Claims" - https://www.supermicro.com/en/pressreleases/supermicro-state...

    After Bloomberg reported a subpoena was sent to Nvidia and other unnamed technology companies...

    "Nvidia Denies It Was Subpoenaed In Justice Department Antitrust Probe" - https://www.forbes.com/sites/antoniopequenoiv/2024/09/04/nvi...

    • kjw 13 hours ago

      Completely agree with you re: Bloomberg's somewhat shady history of reporting. However, in this case, the article is citing a research note written by TD Cowen equity research analysts.

      For example, here's another article from MarketWatch citing this same research note -- https://www.marketwatch.com/story/the-research-note-thats-ra...

      Leads me to lend more credence to the news even though OP is linking Bloomberg.

    • suraci 13 hours ago

      This is indeed worth noting.

      Bloomberg seems to be a player in the market, or at least it is affected by players in the market.

      It always publishes news in good rhythm with the market.

      This article is just a reference; we still need to wait for Microsoft's announcement.

      • isoprophlex 12 hours ago

        so, historically speaking, who should i short? ;)

        • wizzwizz4 12 hours ago

          Unless you learn of a Bloomberg article before it goes out, you shouldn't.

        • suraci 12 hours ago

          when there's a scam, don't be scammed

    • intended 12 hours ago

      I’ve worked in equity research, and an accusation of FUD is an accusation of intentionally malign behavior.

      Getting reporting wrong is not the same as FUD.

      • belter 12 hours ago

        "Years later, Bloomberg doubles down on disputed Supermicro supply chain hack story" - https://www.datacenterdynamics.com/en/news/years-later-bloom...

        • intended 10 hours ago

          Sorry I forgot.

          Yes, this is absolutely FUD.

          This is the insistence on spreading FUD for … some uncertain aim.

          Perhaps it's hitting all these various firms, in order to leverage their strong reputation, to cause the price to drop, allowing someone - perhaps Bloomberg himself - to make a profit off of it.

          No no, I know I sound like a conspiracy theorist now. But my eyes were opened by the sharing of the story.

          The fact that FT, WSJ, Fox or a million other sites haven’t latched onto this obvious scheme, is heinous, and once again a sign of our completely captured media.

      • mvdtnz 12 hours ago

        When you double down on the incorrect reporting instead of retracting or correcting it, as Bloomberg did with their ludicrous spy chip story, it becomes FUD regardless of your initial intent.

        • intended 10 hours ago

          To what purpose? Enron spread FUD to further its goals.

          We can INVENT goals here, to FIT the conclusion we have already reached - that Bloomberg is spreading FUD.

          But this is because we have already concluded, and are now finding things to fit our conclusion.

nokun7 9 hours ago

So, Microsoft’s move to ditch leases for “a couple hundred megawatts” of data center capacity, as noted in TFA, is a pretty intriguing shift—and it’s not just a random cutback. Per some reports from Capacity Media and Analytics India Magazine, it looks like they’re pulling some of their international spending back to the U.S. and dialing down the global expansion frenzy. For context, that “couple hundred megawatts” could power roughly 150,000 homes (typical U.S. energy stats), so it’s a decent chunk of capacity they’re letting go.
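
That homes figure roughly checks out as a back-of-envelope (assuming ~10,500 kWh/year per U.S. household, i.e. about 1.2 kW of average draw; these are EIA-style averages I'm supplying, not numbers from the reports):

    # Sanity check on "roughly 150,000 homes" for ~200 MW of capacity.
    capacity_mw = 200
    avg_home_kw = 1.2  # assumed: ~10,500 kWh/yr divided by 8,760 hours

    homes = capacity_mw * 1000 / avg_home_kw
    print(f"~{homes:,.0f} homes")  # ~166,667, the same ballpark as 150k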

IMO it's not a full-on retreat—Microsoft’s still on track to drop $80 billion this fiscal year on AI infrastructure, as they’ve reaffirmed. But there’s a vibe of recalibration here. They might’ve overcooked their AI capacity plans, especially after being the top data center lessee in 2023 and early 2024. Meanwhile, OpenAI—Microsoft’s big AI partner—is reportedly eyeing other options, like Project Stargate with SoftBank, which could handle 75% of its compute needs by 2030 (per The Information report). That’s a potential shift in reliance that might’ve spooked Microsoft into rethinking its footprint.

Also it seems they're redirecting at least some costs - over half that $80 billion is staying stateside, per Microsoft’s own blog, which aligns with CEO Satya Nadella’s January earnings call push to keep meeting “exponentially more demand.” It’s a pragmatic flex—trim the fat, dodge an oversupply trap, and keep the core humming. Whether it’s genius or just good housekeeping, it shows even the giants can pivot when the AI race gets too hot.

tsunego 7 hours ago

No stargate?

  • rcleveng 7 hours ago

    Maybe stargate is the reason why MS doesn't need these leases?

  • ivalm 7 hours ago

    I don’t think they are a compute partner for stargate.

    • w0m 7 hours ago

      IIRC, MS has right of first refusal on providing any new/necessary compute for Stargate.

      But also IIRC, Stargate was well along before Trump's horse/pony show; also, no idea how much is still necessary to purchase.

  • meindnoch 7 hours ago

    It's already done. Their AI is self-evolving now. No need for data centers anymore.

m00dy 12 hours ago

what happened to Jevons paradox then?

packetlost 7 hours ago

Man, bubble popping already?

  • cgcrob 6 hours ago

    Yep. Investors are shifting capital out of the market. Always happens. The little guys end up paying for the losses.

josefritzishere 7 hours ago

Is the AI Bubble already popping?

  • elif 6 hours ago

    I doubt it. MS has what they need in the OpenAI partnership. I think this is more likely just a reflection of the broader economic environment. Going into a recession, cut investment, and try to retain as much talent as you can afford to for the next few years.

  • kittikitti 6 hours ago

    Do you think Big Tech will go down when it does or will it further consolidate and centralize power?

    • bluefirebrand 6 hours ago

      Given how things go when past bubbles have popped, this is likely to be "both" I think. Just not all at once

      When the bubble pops you see things collapse

      Then it becomes a feeding frenzy as companies and IPs get bought up on the cheap by whoever has a bit of money left

      When the dust clears, some old players are gone, some are still around but weaker, some new players have emerged that resemble conglomerates of the old players, but overall a lot of the previous existing power is consolidated into fewer hands