Frieren 4 months ago

The news has already been corrected:

"Google Co-Scientist AI cracks superbug problem in two days! — because it had been fed the team’s previous paper with the answer in it" https://news.ycombinator.com/item?id=43162582#43163722

Let's see how many points the correction gets. It would be good if it achieved the same or more visibility than this one, to keep HN informative and truthful.

  • ipsum2 4 months ago

    That's not exactly right. The answer was not explicitly given, according to the source:

    "However, the team did publish a paper in 2023 – which was fed to the system – about how this family of mobile genetic elements “steals bacteriophage tails to spread in nature”. At the time, the researchers thought the elements were limited to acquiring tails from phages infecting the same cell. Only later did they discover the elements can pick up tails floating around outside cells, too.

    So one explanation for how the AI co-scientist came up with the right answer is that it missed the apparent limitation that stopped the humans getting it.

    What is clear is that it was fed everything it needed to find the answer, rather than coming up with an entirely new idea. “Everything was already published, but in different bits,” says Penadés. “The system was able to put everything together.”"

    https://www.newscientist.com/article/2469072-can-googles-new...

    • strangescript 4 months ago

      That is what 90% of science is, really. There aren't a lot of truly "aha" moments where someone discovers something fundamental with little outside influence.

      • xbmcuser 4 months ago

        This is what I think AI, as it gets better, will be great at, and it will result in some great discoveries. Today we have millions of research papers on millions of topics; the amount of knowledge is impossible for any one human or group of humans to hold, so some discoveries require serendipity.

      • Mr_Bees69 4 months ago

        A person suffering hallucinations is the only true creative; the rest is inadvertent or intentional synthesis.

    • slashdev 4 months ago

      Sometimes science isn’t doing completely novel things, but combining ideas across different disciplines or areas.

      AI has some potential here, because unlike a human, AI can be trained across all of it and has the opportunity to make connections a human, with more limited scope, might miss.

      • Borealid 4 months ago

        What matters isn't that an AI "make connections", it's that the AI generates some text that causes a human to make the connections. It doesn't even matter if what the AI generates is true or not, if it leads the human to truth.

        In this particular example it wasn't useful, because the reader already knew the answer and was fishing for it with the LLM. But generally, using an LLM as a creativity-inducer (a brainstorming tool) is fine, and IMO a better idea than trying to use one as an oracle.

        • suzzer99 4 months ago

          > What matters isn't that an AI "make connections", it's that the AI generates some text that causes a human to make the connections.

          This has been true since Waze and Google Maps first came out. Some of the suggestions were like "Huh, I never thought of going that way... hey, this works... good job Waze!" and some were like "No Waze, I'm not going to try to cross eight lanes of free-flowing traffic at rush hour."

          What sucks is when an AI bot hallucinates some transgression and bans you from a monopoly marketplace for life, with no human recourse. Ask me how I know.

          • s1gsegv 4 months ago

            > What sucks is when an AI bot hallucinates some transgression and bans you from a monopoly marketplace for life, with no human recourse. Ask me how I know.

            This is fundamentally an incentive problem. Whether MegaCorp reaches this decision through an AI or a team of people who made a mistake, interpreted a rule differently than intended, missed context, whatever, they have been allowed to set up the system in a way that puts all the incentives in their favor. They don't need you, and further, helping you costs them money. Yet we allow them to be the only game in town and still say it's at their convenience to give you service.

            The focus in the modern world needs to be on re-incentivizing companies to do the right things, AI or otherwise.

            • suzzer99 4 months ago

              I think we need some kind of legislation saying that a death penalty from a monopoly marketplace can't be carried out without a face-to-face appeal with a live human being.

              • s1gsegv 4 months ago

                That'd certainly be a good step, but I think it still only incentivizes them to hear you out. Since in this case you would have demonstrated that you're a customer who risks costing them money (in the form of this appeal, if nothing else), they're still incentivized not to do business with you if given the option.

                Even in the case of something like a credit card company where it’s not technically a monopoly, you risk them all coming to the same conclusion.

                Essential for-profit service combined with at-will service is simply a recipe for failure.

                • suzzer99 4 months ago

                  I wasn't a customer. I wrote a book and tried to publish it on KDP, Amazon's digital publishing wing. A bot hallucinated that my eBook was plagiarizing my paperback (well yeah, it's the same book) and suspended my account.

                  All appeals were denied with increasingly vague language, either by more bots, or humans with strong incentive to just rubber stamp the bot's decision and move on. I'll never know which.

                  It's all here in this pamphlet: https://news.ycombinator.com/item?id=40992654

        • bmacho 4 months ago

          > It doesn't even matter if what the AI generates is true or not, if it leads the human to truth.

          A weird statement. I think you only care about humans knowing the thing, right? In that case, sure, it doesn't matter whether the machine is right or wrong.

          However, if the machine is right... that will have consequences.

        • NoPicklez 4 months ago

          I think in this case both things matter: whether the AI gives you the answer, and whether the AI leads you to the answer yourself.

      • dmix 4 months ago

        > Sometimes science isn’t doing completely novel things, but combining ideas across different disciplines or areas.

        That's basically 99% of startups/business too.

        • hackit2 4 months ago

          That is because all of the leading "science" is being done by private pharmaceutical companies and businesses that have strict NDAs and security.

      • godelski 4 months ago

          > to make connections a human, with more limited scope, might miss.
        
        I have a concern that all this AI stuff is leaving less time to "fuck around and find out" (i.e. experimentation), just like how shifting away from compiled languages took away the time you'd spend thinking about the code or reading docs while things compiled. Sometimes you just need to walk away from the desk to solve a problem. It's kinda ironic that reducing the blocking aspects ends up creating more blocks, but the reason walking away from the desk works is that your creative side needs time to think and imagine. It's why you play with your lab equipment or make fun programs to learn. But if everything is focused only on getting to the product, then you end up straying away from that.

        • jdiez17 4 months ago

          In my experience with AI for research and programming I find the opposite. It lets me get to the FAFO stage sooner, so I can try many possible paths (many of which may be dead ends).

          • godelski 4 months ago

            That's not going to help if there's anything subtle. Yeah, there's lots of failure in research, but many things fail because of subtle issues, and AI is terrible at handling anything subtle.

            I do use AI myself, but tbh I'm constantly fighting it and find it frequently misses small, subtle things that are important. I even notice that people accept answers that I wouldn't, and that's sometimes concerning.

      • Mr_Bees69 4 months ago

        >Sometimes

        Usually.

        The media portrays science in an unhelpful manner. GPT-3.5 didn't appear out of thin air; DeepSeek R1 was built on DeepSeek-MoE, which was built on research by Mistral.

      • math_dandy 4 months ago

        I suspect your first sentence characterizes the vast majority of scientific activity.

    • letitgo12345 4 months ago

      Or the humans did think of it and were actively proceeding to test that hypothesis.

    • dartos 4 months ago

      Every day it becomes clearer and clearer that LLMs are, mainly, a huge step forward in search algorithms.

    • guelo 4 months ago

      I like to think of LLMs as amazing semantic search engines. Making these types of missed connections in vast lakes of existing data is not something humans are good at.

      • d1sxeyes 4 months ago

        That’s absolutely not true, humans are very good at finding connections in huge datasets.

        LLMs also happen to be pretty good at it, but unlike humans they don’t get bored or tired from doing it too much.

  • basisword 4 months ago

    When I first read this a few days ago the scientists explicitly stated that they hadn't published their results yet. Have they changed their story? From the BBC article on it:

    "He told the BBC of his shock when he found what it had done, given his research was not published so could not have been found by the AI system in the public domain."

    Also:

    Prof Penadés said the tool had in fact done more than successfully replicate his research. "It's not just that the top hypothesis they provide was the right one," he said. "It's that they provide another four, and all of them made sense. And for one of them, we never thought about it, and we're now working on that."

    • kedean 4 months ago

      They hadn't published the latest paper, but the many papers leading up to this were published, so it's able to work off of those. I'm not an expert in this field, but in this case it seems they had already published a paper that listed this idea as an option and dismissed it, so in all likelihood Co-Scientist simply ignored the theoretical limitation.

      Either way, the headline is garbage. It's like being amazed that a coworker, to whom you've documented every step of your process, managed to solve that problem before you. "I've been working on it for months and they solved it in a day!" is obviously misleading; they had all of the information and conclusions you produced while you worked on it. In this case, telling the coworker every detail is the non-consensual AI training process.

      • pockmarked19 4 months ago

        Yes, but it’s the same level of garbage as “Facebook was built in a week”, where the hidden part is “after The Harvard Connection spent two years iterating on the product and got everything stolen”.

        Society and people in general don't want to hear these "sour grapes" gripes. Unless they are the ones affected adversely.

      • basisword 4 months ago

        Interesting. In that case I wouldn't put too much blame on the headline - sounds like the scientist was pretty disingenuous when telling the story initially.

    • knowitnone 4 months ago

      "could not have been found by the AI system in the public domain." could he have been working in the cloud that exposed his unpublished paper to AI?

  • bawolff 4 months ago

    The moment I read the headline I kind of suspected that was the case (either implicitly or explicitly).

    The exciting bit will be when AI does something nobody has done before. Then we know for sure it isn't cheating.

    • wizzwizz4 4 months ago

      That milestone was achieved in the… 60s, I want to say? Definitely some time in the 20th century.

  • j45 4 months ago

    I get it if LLMs don't read correctly, but reporters?

  • h1fra 4 months ago

    /s Breaking: the last person to lay the last brick of the house has built a house in just a few minutes.

  • bilekas 4 months ago

    It's clear after you read the article but that title is really typical these days. It's frustrating as heck but I will admit, it made me click on the link to see what it was all about. Still super impressed all the same.

  • rybthrow2 4 months ago

    What about the other two use cases it came up with as per the paper: repurposing existing drugs and identifying novel treatment targets for liver fibrosis?

    https://research.google/blog/accelerating-scientific-breakth...

    • colingauvin 4 months ago

      This article claims only 3 of the new materials were actually synthesizable (and all had been previously known), and that the drug for liver fibrosis had already been investigated for liver fibrosis.

      https://pivot-to-ai.com/2025/02/22/google-co-scientist-ai-cr...

      • rybthrow2 4 months ago

        The synthesizable materials are from another paper by DeepMind, which doesn't rely on LLMs and is unrelated to this one. The article's author briefly mentions one of the other two findings without providing any sources to support the claim that they aren't novel/useful.

        If it helps scientists find answers faster, I don’t see the problem—especially when the alternative is sifting through Google or endless research papers.

furyofantares 4 months ago

It says it took them a decade, but they obviously published loads of intermediate results, as did anyone else working in the space.

I get that they asked it about a new result they hadn't published yet, but the claim that it did in two days what took them a decade -- even though it was trained on everything published in that decade, including whatever intermediate results they published -- is probably just as absurd as it sounds.

  • tippytippytango 4 months ago

    I’d bet the hypothesis was already in the training data. Someone probably suggested it in the future work section of some other paper.

    • hinkley 4 months ago

      One of the annoying things about the LZW compression patent was that it covered something Lempel and Ziv had already mentioned in the further study section of their original paper. Someone patented an “exercise left to the reader”.

  • vintagedave 4 months ago

    Maybe, but:

    > He told the BBC of his shock when he found what it had done, given his research was not published so could not have been found by the AI system in the public domain.

    and,

    > Critically, this hypothesis was unique to the research team and had not been published anywhere else. Nobody in the team had shared their findings.

    • furyofantares 4 months ago

      Right, the actual result they asked about was not published. But do a search for the author: they have indeed published loads of related stuff in the last decade, even if not that actual result and even if not pointing directly at it.

    • jgalt212 4 months ago

      I guess, but pretty much every LLM is trained with data outside the public domain--whether they admit to it, or not.

      • mossTechnician 4 months ago

        Since a Google product output the result, it had access to the same knowledge base as Google itself - Search, Scholar, etc. And use of copyrighted data for research purposes has always been, as far as I know, considered fair use in AI training.

        Unfortunately, even if we wanted to, it might be impossible to attribute this success to the authors that it built upon.

        • monkeydreams 4 months ago

          > considered fair use in AI training

          By AI trainers, if not by the authors whose works were encoded.

      • fsckboy 4 months ago

        >outside the public domain

        "Public domain" is a legal term meaning something like "free from copyright or license restrictions".

        I believe people in this thread mean not that, but "published and likely part of the training corpus".

  • CivBase 4 months ago

    "New hammer builds house in hours that took construction workers months." Reads the same to me. They introduced a new tool to help with a task that sounds like it was already near completion.

    It is still a big accomplishment if the AI helped them solve the problem faster than they could have without it, but the headline makes it sound like it solved the problem completely from scratch.

    This kind of hyperbole is what makes me continue to be skeptical of AI. The technology is unquestionably impressive and I think it's going to play a big role in technology and advancement moving forward. But every breakthrough comes with such a mountain of fluff that it's impossible to sort the mundane from the extraordinary and it all ends up feeling like a marketing-driven bubble waiting to burst.

  • gota 4 months ago

    Tentative rephrasing: "When given all of the facts we uncovered over a decade as premises, the computer system generated the conclusion instantly"

    edit - instantly is apparently many hours, hence the 'two days', just to be clear

  • melagonster 4 months ago

    The professor has not published it yet because they need to do experiments to test the hypotheses.

root_axis 4 months ago

> Critically, this hypothesis was unique to the research team and had not been published anywhere else. Nobody in the team had shared their findings

This seems like the most important detail, but it also seems impossible to verify if this was actually the case. What are the chances that this AI spat out a totally unique hypothesis that has absolutely no corollaries in the training data, that also happens to be the pet hypothesis of this particular research team?

I'm open to being convinced, but I'm skeptical.

  • card_zero 4 months ago

    I wonder about a "clever Hans" effect, where they unwittingly suggest their discovery in their prompt. Also whether they got paid.

    • cwillu 4 months ago

         "It's not just that the top hypothesis they provide was the right one," he said.
         "It's that they provide another four, and all of them made sense.
         And for one of them, we never thought about it, and we're now working on that."

      • card_zero 4 months ago

        I wonder whether that one is really any good.

    • graeme 4 months ago

      May well be, but in this case it would be a two-way clever Hans, which is very promising.

  • mtrovo 4 months ago

    Define "totally unique hypothesis" in this context. If the training data contains studies with paths like A -> B and C -> D -> E, and the AI independently generates a proof linking B -> C, effectively creating a path from A -> E, is that original enough? At some point, I think we're going to run out of definitions for what makes human intelligence unique.
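
    (A toy sketch of the point, nothing domain-specific: treat published findings as edges in a graph, and watch how one inferred edge completes a path that was never stated anywhere.)

      # Toy illustration only: "published" links as graph edges.
      known = {"A": {"B"}, "C": {"D"}, "D": {"E"}}

      def reachable(graph, start, goal, seen=frozenset()):
          # Depth-first search over the known links.
          if start == goal:
              return True
          seen = seen | {start}
          return any(reachable(graph, nxt, goal, seen)
                     for nxt in graph.get(start, set()) if nxt not in seen)

      print(reachable(known, "A", "E"))   # False: the literature alone
      known["B"] = {"C"}                  # the single "new" link B -> C
      print(reachable(known, "A", "E"))   # True: A -> B -> C -> D -> E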

    > It also seems impossible to verify if this was actually the case.

    If this is a thinking model, you could always debug the raw output of the model's internal reasoning when it was generating an answer. If the agent took 48 hours to respond and we had no idea what it was doing that whole time, that would be the real surprise to me, especially since Google is only releasing this in a closed beta for now.

  • TrackerFF 4 months ago

    Could some of the scientists have saved their data in the Google cloud, say using Google Drive? And then some internal Google crawler went through and indexed those files?

    I don't know what their policy says about that, or if it's even something they do... at least not publicly.

  • bArray 4 months ago

    Google openly train stuff based on your email; they used it specifically to train Smart Compose, but maybe other stuff too. He likely uses multiple Google products. Draft papers in Google Drive, perhaps?

    These LLMs are essentially trying to produce material that sounds correct; perhaps the hypothesis was a relatively obvious question given the right domain knowledge.

    Additionally, he may not have been the first to ask the question. It's entirely possible that the AI chewed up and spat out some domain knowledge from a foreign research group outside of his wheelhouse. This kind of stuff happens all the time.

    I personally have accidentally reinvented things without prior knowledge of them. Many years ago at university I remember deriving a PID controller without being aware of what one was. I probably got enough clues from other people and media who were aware of them that bridging that final gap was easier.

    • Jabbles 4 months ago

      > We do not use your Workspace data to train or improve the underlying generative AI and large language models that power Gemini, Search, and other systems outside of Workspace without permission.

      https://support.google.com/meet/answer/14615114?hl=en#:~:tex...

      You may not believe them, but I challenge your description of it as "openly".

      • collingreen 4 months ago

        What constitutes permission? Did we all give full permission (in Google's opinion) without knowing it, somewhere in the dozens and dozens of pages of EULAs and the deluge of privacy policy changes? The other comments are valid but irrelevant if Google thinks (rightly or wrongly) that "by using this product you give permission to xyz".

        I'm reminded of when Microsoft said you didn't actually buy the Xbox even though you thought you did and they won in court to prevent people from changing or even repairing their own (well, Microsoft's I guess, even though the person paid for it and thought they bought it) machine.

        • bArray 4 months ago

          Exactly this. In the pages and pages of terms and conditions, this will likely be covered, plus a lot of these services are opt-out. By not unticking "train on my workspace data", hidden several menus deep, you are probably giving permission by default.

      • protimewaster 4 months ago

        Isn't that implying that they do train on Workspace data, but the results of the training won't be applied outside of the Workspace?

        It is an awful sentence, but I'm reading it as:

        > We do not use your Workspace data to train or improve the underlying generative AI and large language models [...] outside of Workspace [...].

        Which to me makes it sound like if the answer is in your Workspace, and you ask an LLM in your Workspace, it would be able to tell you the answer.

      • stonogo 4 months ago

        Yeah, "Workspace data." If you don't think a scientist has copies of all his/her stuff in a personal Google account you've never met a scientist.

        • cwillu 4 months ago

          The second half of the sentence is a bit awkward, but the first bit is pretty clear: “We do not use your Workspace data”

          • progval 4 months ago

            But a personal account is not Workspace, as Google Workspace is a B2B product, right?

          • Xelynega 4 months ago

            Does it not say "without permission"?

            Was signing up for a workspace account and signing a EULA permission? Is permission given in an implicitly signed document like a privacy policy?

          • protimewaster 4 months ago

            Isn't it saying that it might be trained on Workspace data, but that the results of that training will only be used within the same Workspace?

            • cwillu 4 months ago

              It's not being used to train the underlying generative AI.

              • protimewaster 4 months ago

                Doesn't it only say it won't be trained for generative AI that works outside of the Workspace?

                > We do not use your Workspace data to train or improve the underlying generative AI and large language models that power Gemini, Search, and other systems outside of Workspace

                I.e., they could use it to train an LLM specific to your Workspace, but that training data wouldn't be used outside of the Workspace, for the general generative products.

    • crazygringo 4 months ago

      > Google openly train stuff based on your email

      They do not.

      Many years ago, they served customized ads based on your email. Then they stopped that, even for free accounts, because it led to a lot of unfounded misunderstanding that "Google reads your email, Google trains on your email"...

      • jsiepkes 4 months ago

        Can you prove they don't? Right now Google is losing in the AI space. I wouldn't be surprised if they went with "it's better to ask for forgiveness later than permission upfront".

        • crazygringo 4 months ago

          Look at all their legal contracts, like their terms of service.

          Then also consider how all of their enterprise customers would switch to MS/AWS in a heartbeat if they found out Google was training on their private, proprietary data.

          Google Cloud has been a gigantic investment they've been making for well over a decade now. They're not going to throw away consumer and enterprise trust to train on a bunch of e-mail.

          • mschuster91 4 months ago

            > Then also consider how all of their enterprise customers would switch to MS/AWS in a heartbeat if they found out Google was training on their private, proprietary data.

            Enterprise customers are one thing, but private customers of ordinary Gmail? Completely another thing.

            > Google Cloud has been a gigantic investment they've been making for well over a decade now. They're not going to throw away consumer and enterprise trust to train on a bunch of e-mail.

            Consumers barely trust Google any more these days, not with the frequent stories about arbitrary bans, and corporations shy away from Google Cloud for the same reason, plus the service quality being way lower than AWS's.

      • Lastminutepanic 4 months ago

        I mean, they never stopped "training on your email"... I'm sure when they started off (back when they may not have had the code/compute to trawl and model the body of the email), it was just the metadata they used to help build data for targeted marketing. And they are still absolutely using metadata...

        But yeah, in this specific case it is way less nefarious: just one, or both, of Google and the scientist selling a new AI product with a sensationalist, unrealistic story in a huge, publicly funded, "serious" news outlet. At least when the NY Times drools down its chin at some AI vaporware, they may be getting a huge advertising buy, or someone there owns a lot of stock in the company... The BBC can't even hide behind "well, that's capitalism, baby".

        I will say the prime minister and his red Thatcherites have been obsessed with becoming a player in the AI industry... if you want a conspiracy theory, I think that one is more likely, haha.

  • miyuru 4 months ago

    > "I wrote an email to Google to say, 'you have access to my computer, is that right?'", he added.

    Sounds extra fishy, since Google does not normally provide email support.

    • bArray 4 months ago

      Exactly my thought, probably the least likely part of the whole thing. He emailed Google and they replied. Not only that, he asked a question they would really rather not answer.

nurumaik 4 months ago

>He gave "co-scientist" - a tool made by Google - a short prompt asking it about the core problem he had been investigating and it reached the same conclusion in 48 hours.

Could it be the case that asking the right question is the key? When you already know the solution, it's actually very easy to accidentally include hints in your phrasing of the question that make the task 10x easier.

  • killerteddybear 4 months ago

    This is a very common mistake with LLMs, I find. Lots of people with deep domain knowledge will be very impressed by an LLM in situations where they phrase a question in such a way that it unintentionally leads the model to a specific answer, which they then see as rightfully impressive, not realizing how much information they encoded in the question.

monkeydust 4 months ago

Using this, launched yesterday:

Today Google is launching an AI co-scientist, a new AI system built on Gemini 2.0 designed to aid scientists in creating novel hypotheses and research plans. Researchers can specify a research goal — for example, to better understand the spread of a disease-causing microbe — using natural language, and the AI co-scientist will propose testable hypotheses, along with a summary of relevant published literature and a possible experimental approach.

https://blog.google/feed/google-research-ai-co-scientist/

fatbird 4 months ago

So the AI didn't prove anything; it offered a hypothesis that wasn't in the published literature, which happened to match what they'd spent years trying to verify. I can see how that would look impressive, and he says that if he'd had this hypothesis to start with, it would have saved those years.

Without those years spent working the problem, would he have recognized that hypothesis as a valuable road to go down? And wouldn't the years of verifying it still remain?

Lastminutepanic 4 months ago

Here is the scientist's Google Scholar account. He has been publishing (and so have lots of other scientists; just look at the papers HE cites in previous work) about this exact scenario for years. This is just Google announcing a brand new AI tool (the co-scientist tool was literally just released in the past few days). And mainstream outlets have 1. no clue about how AI works, and 2. a pathological deference to anyone with lots of letters after their name from impressive places.

https://scholar.google.com/citations?hl=en&user=rXUHiP8AAAAJ...

patcon 4 months ago

A thought occurred to me, as someone involved in some projects trying to recalibrate incentives for science funding: good grad students are usually more intersectional across fields (compared to supervisors) and just more receptive to outsider ideas. They unofficially provide a lot of the same value that AI is about to provide established researchers.

I wonder how AI is going to mess with the calculus of employing grad students, and if this will affect the pipeline of future senior researchers...

programmertote 4 months ago

When I read "cracks superbug problem", I thought AI had solved how to kill superbugs. From reading the article, it seems like the AI suggested a few hypotheses, one of which is similar to what the researcher thought of. So in a way, it hasn't cracked the problem, although it helped in forming ONE of the hypotheses, which still needs to be tested in experiments(?)

Just want to make sure I'm understanding what's written in the article accurately.

  • newsreaderguy 4 months ago

    It suggested other hypotheses as well, including one that they hadn't thought of and are now investigating.

didntknowyou 4 months ago

Why did it take 48 hours? Did he give the AI data to process, or was it just a prompt? Did it spit back a conclusion or a list of possible scenarios it had scraped? Seems like a PR stunt.

  • tecleandor 4 months ago

    While there is some stuff around this that sounds like PR (probably some intermediate results and/or the current SOTA in that field could help reach that result), the process seems interesting. It seems like it launches, ahem, "agents" that run simulations and verifications, then refine and iterate over different options... and it takes a while:

    https://research.google/blog/accelerating-scientific-breakth...
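
    Roughly, I picture it as a loop like this (purely an illustrative sketch of the generate/critique/refine idea, not Google's actual pipeline; ask_llm is a hypothetical stand-in for a model call):

      from typing import List

      def ask_llm(prompt: str) -> List[str]:
          # Placeholder: a real system would call a language model here.
          return [f"candidate idea derived from: {prompt[:40]}..."]

      def co_scientist(goal: str, rounds: int = 3, keep: int = 5) -> List[str]:
          hypotheses = ask_llm(f"Propose testable hypotheses for: {goal}")
          for _ in range(rounds):
              refined: List[str] = []
              for h in hypotheses:
                  critique = ask_llm(f"Critique against known results: {h}")
                  refined += ask_llm(f"Refine {h!r} given {critique!r}")
              # A real system would rank the candidates; here we just truncate.
              hypotheses = refined[:keep]
          return hypotheses

      print(co_scientist("how do cf-PICIs acquire phage tails?"))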

gptacek 4 months ago

"AI" didn't "crack" anything here. An LLM generated text that's notionally similar to a hypothesis this researcher was interested in but hadn't published. You can read Dr. Penades in his own words on BioRxiv, and if you might have been interested in reading the prompt or the output generated by co-scientist, it's included in the results and SI: https://www.biorxiv.org/content/10.1101/2025.02.19.639094v1

What actually happened here looks more like rubber-ducking. If you look at the prompt (Supplementary Information 1), the authors provide the LLM with a carefully posed question and all the context it needed to connect the dots to generate the hypothesis. The output (Supplementary Information 2) even states outright what information in the prompt led it to the conclusion:

"Many of the hypotheses you listed in your prompt point precisely to this direction. These include, but are not limited to, the adaptable tail-docking hypothesis, proximal tail recognition, universal docking, modular tail adaptation, tail-tunneling complex, promiscuous tail hypothesis, and many more. They collectively underscore the importance of investigating capsid-tail interactions and provide a variety of testable predictions. In addition, our own preliminary data indicate that cf-PICI capsids can indeed interact with tails from multiple phage types, providing further impetus for this research direction."

steeeeeve 4 months ago

I find this odd because that's exactly how I thought viruses worked when crossing species, and I have no background that would lead me to that conclusion and almost nothing in my life that would make me ponder such a thing.

I feel like someone explained this to me in the '80s.

kachapopopow 4 months ago

This might be a dupe, and the title is completely misleading: it (the AI) simply provided one of the hypotheses (which took two years to confirm) as the top result. A group of humans with expertise in the subject could come up with these in seconds.

dotdi 4 months ago

Maybe this is too tin-hatty, but sending the unpublished manuscript from (or to) a free Gmail account gives Google the right to use it, and therefore also to train whatever AI model they want with it.

Sounds like all of those claims where ChatGPT allegedly coded a Flappy Bird clone from scratch. Only it didn't; it just regurgitated code from several GitHub repos.

aqueueaqueue 4 months ago

Hey guys, I cracked the general theory of relativity in 30 seconds. Just got to find the right download!

marcus_holmes 4 months ago

But they needed to have already solved the problem in order to be able to verify that the LLM hadn't just hallucinated a solution.

(And I wonder how many hallucinated solutions the LLM came up with that were rejected - sorry, "refined" - by the team.)

skgough 4 months ago

I'm getting the impression that this worked because the LLM had hoovered up all the previous research on this topic and found a reasonable string of words that could be a hypothesis based on what it found?

I think we are starting to get to the root of the utility of LLMs as a technology. They are the next generation of search engines.

But it makes me wonder: if we had thrown more resources at using "traditional" search techniques on scientific papers, could we have gotten here without the gigawatts of GPU work spent on it, and a few years earlier?

camkego 4 months ago

I'd like to know how he got an email back from Google confirming that they don't have access to his computer.

cbm-vic-20 4 months ago

There is a real societal danger in ascribing abilities to "AI" that it just doesn't have.

nurettin 4 months ago

"LLMs are great at solving problems that already have solutions."

This is performance art, right?

est 4 months ago

Can't wait for AI to discover tons of Ramanujan-style math inventions.

Lastminutepanic 4 months ago

Lmao. Not at all surprised the BBC didn't even bother to look at this guy's Google Scholar account. He has been publishing papers about this exact scenario for years. So have many other scientists.

A few years back I was so sick of blockchain vaporware, and honestly couldn't think of anything more annoying... But several years of reading "serious" outlets publish stuff like "AI proves the existence of God" or "AI solves cold fusion in 15 minutes, running on a Canon R6 camera" makes me wish for the carefree days of idiots saying "So you've heard of Uber; now imagine Uber, but it's on the blockchain, costs 0.1 ETH just to book a ride, and your home address is publicly accessible"...