Will AI replace science?

Pentagon wants to replace scientific research with AI systems

Darpa is soliciting proposals for AI systems that collect data on their own, develop hypotheses, and build and test models - and do all of this as quickly as possible

While some expect that the spread of AI applications will cost jobs, including academic ones, but also create new ones, others warn of a massive displacement of human labor, whose value will keep sinking as it competes with AI systems in ever more areas. Like other defense ministries, the Pentagon is committed to the rapid development and deployment of military AI systems. It sees itself in competition with China, but also with Russia.

In this technological race, Darpa, the Pentagon's research agency founded after the Sputnik shock in 1958 to secure US technological superiority through innovative projects, has hatched an interesting idea. It is meant to accelerate scientific innovation and could at the same time serve to dispense with human scientists.

It is the first project in the new AI research program that aims to create "Third Wave" AI systems, ones that overcome the limits of existing machine-learning and rule-based AI systems. In essence, the call for proposals asks how scientific discovery and its application can be automated.

When building computer models of complex systems, Darpa argues, it is difficult to elicit the necessary information from experts and incorporate it into the models, which is why the models carry the limitations of their creators' knowledge and assumptions: "In addition, it is rare that scientists and subject experts are also programming experts at the same time, therefore, the implemented models often do not obey software programmer best practices, making them subject to error and difficult to verify or improve upon as new information emerges."

Errors thus creep in through the imperfection, subjectivity, and limited programming skills of scientists. The desired AI systems should therefore cut scientists out of the loop, building and maintaining extensive models of complex systems on their own, and they should also be able to reason about them "by extracting and interpreting the scientific findings and assumptions embedded in the existing model code, automatically recognizing new data and information sources, extracting useful information, integrating it into the machine-supported expert models, and executing them robustly".

Ultimately, Darpa wants an AI system that finds data and sources on its own, sifts them for useful information, and builds new models. Ideally, the system should then also generate and test its own hypotheses in order to optimize its predictions. That is no small task - and it is supposed to happen very quickly: Darpa would like a final prototype to be submitted within 18 months.
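To make the envisioned loop a little more concrete, here is a minimal, purely hypothetical sketch (not taken from the solicitation) of such a generate-model-test cycle. The data, the candidate model families (polynomials), and the scoring by prediction error are all illustrative stand-ins:

```python
import numpy as np

# Hypothetical "automated science" loop: propose candidate models
# (hypotheses), fit them to observed data, keep the one that predicts
# held-out data best. Everything here is an illustrative stand-in.

rng = np.random.default_rng(0)

# Stand-in for data the system has "collected itself":
# an unknown process it is trying to model.
x = np.linspace(0, 10, 200)
y = 2.0 * x + 0.5 * x**2 + rng.normal(0, 3.0, size=x.size)

train, test = slice(0, 150), slice(150, 200)

def fit_and_score(degree):
    """Candidate hypothesis: a polynomial model of the given degree."""
    coeffs = np.polyfit(x[train], y[train], degree)   # build the model
    pred = np.polyval(coeffs, x[test])                 # make predictions
    return np.mean((pred - y[test]) ** 2)              # check the hypothesis

# The loop: generate hypotheses, test them, keep the best-performing one.
scores = {d: fit_and_score(d) for d in range(1, 6)}
best = min(scores, key=scores.get)
print(f"best hypothesis: degree {best}, test MSE {scores[best]:.2f}")
```

The real program, of course, aims at far messier systems than a toy curve fit; the sketch only shows the shape of the hypothesize-model-check cycle the text describes.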

Optimize and accelerate

But Darpa wants even more: the AI system should also provide explanations in response to various expert queries and be able to formulate machine-generated hypotheses. The scientific AI system could be used, for example, to check scientific results, to monitor "fragile economic, political, social and environmental systems in real time", or to advance natural-language processing, machine learning, and human-machine collaboration.

And all of this should, of course, answer "important questions of national security" faster and better, by orders of magnitude. Not unimportant for military planning, one thinks of predictions under changed variables. Counterfactual analyses are evidently also considered significant, i.e. the question of what would have happened if X had occurred instead of Y. The goal is to compute strategies, assess risks, and optimize decisions.
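As a purely illustrative sketch (again, nothing from the solicitation), a "changed variables" query can be as simple as re-running a fitted model with one input swapped out and comparing the predicted outcomes; the toy variables and linear model below are hypothetical stand-ins:

```python
import numpy as np

# Illustrative counterfactual query on a toy model: "what would the
# predicted outcome have been if one input had taken a different value?"

rng = np.random.default_rng(1)

# Toy observed data: an outcome driven by two planning variables.
supply = rng.uniform(0, 10, 500)    # e.g. available resources
delay = rng.uniform(0, 5, 500)      # e.g. response time
outcome = 3.0 * supply - 2.0 * delay + rng.normal(0, 1.0, 500)

# Fit a simple linear model (least squares) to the observed data.
X = np.column_stack([supply, delay, np.ones_like(supply)])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

def predict(s, d):
    return coef @ np.array([s, d, 1.0])

# Factual scenario vs. a counterfactual with one variable changed.
factual = predict(s=4.0, d=3.0)
counterfactual = predict(s=4.0, d=1.0)   # "what if the delay had been shorter?"
print(f"factual: {factual:.2f}, counterfactual: {counterfactual:.2f}, "
      f"difference: {counterfactual - factual:.2f}")
```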

It remains unclear in these ideas where the scientific curiosity of independently working AI systems is supposed to come from, the curiosity needed to generate hypotheses that are then tested against models. And why should "third wave" AI systems not carry built-in assumptions of their own? In this tender, too, the AI systems are after all first programmed by humans, and their machine learning is set in motion by humans. Even if first-order AI systems independently create second-order AI systems as their own products or offspring, those preconditions remain and are inherited. There is also the question of whether AI systems would actually be capable of a paradigm shift (Thomas Kuhn), i.e. of initiating a scientific revolution. But neither Darpa nor the Pentagon is interested in such questions. The point is simply to be superior to one's opponents and to be faster. (Florian Rötzer)
