Max-Planck-Institut für Intelligente Systeme
Our goal is to understand the principles of perceiving, learning, and acting in autonomous systems that interact with complex environments, and to use this understanding to develop artificial intelligent systems. Scientists at the Max Planck Institute for Intelligent Systems investigate these principles in biological, hybrid, and computer systems as well as in materials, at scales ranging from the nano to the macro level. With our strongly interdisciplinary approach, we combine mathematical modeling, computer science, materials science, and biology.
"The ability to design systems for autonomous robotics and intelligent software is a key future technology for industry, transport, and logistics, as well as for our society as a whole. ... Intelligent systems in nature, including humans, have developed sophisticated abilities to thrive in the world through interaction, evolution, and learning. Yet our understanding of these phenomena is still very limited, and the synthesis of intelligent, autonomous, learning systems remains a major scientific challenge."
Max Planck Institute for Intelligent Systems
https://www.is.mpg.de/de/overview
“The institute's scientists investigate the organizational principles of intelligent systems, seeking to understand both these principles and the underlying cycle of perceiving, acting, and learning.”
Departments in Tübingen: Machine Learning, Computer Vision, Robotics, Control, and the Theory of Intelligent Systems.
(1) Empirical Inference - Tübingen site
“The researchers' primary goal is to understand how living beings and artificial systems detect structure in order to act in the world.” “Our department is conducting theoretical, algorithmic, and experimental studies to try and understand the problem of empirical inference.”
(2) Perceiving Systems
“The Perceiving Systems department combines computer vision, machine learning, and computer graphics with the goal of teaching computers to understand humans and their behavior in images and videos. The department's approach is unique in that mathematical models of human 3D shape and motion are built with machine learning and described by comparatively few parameters. These models are then used to extract and analyze human movement from 3D scenes. The department employs around 45 staff and students, plus further affiliated researchers. It operates dedicated 4D scanners that capture highly precise, detailed 3D meshes of bodies, faces, hands, and feet at 60 frames per second. Wearable motion-capture systems, flying robots, and highly specialized camera systems are also used for recording.”
We combine research on computer vision, computer graphics, and machine learning to teach computers to see and understand humans and their behavior. A key goal is to learn digital humans.
(3) Autonomous Motion
The Autonomous Motion Department focuses on research into intelligent systems that can move, perceive, and learn from experience.
“We are interested in investigating such perception-action-learning loops in biological systems and robotic systems, which can range in scale from nano systems (cells, nano-robots) to macro systems (humans, and humanoid robots).”
(4) Autonomous Vision Group - young people / students!
We are interested in computer vision and machine learning with a focus on 3D scene understanding, parsing, reconstruction, material and motion estimation for autonomous intelligent systems such as self-driving cars or household robots. In particular, we investigate how complex prior knowledge can be incorporated into computer vision algorithms for making them robust to variations in our complex 3D world.
(5) Autonomous Learning Group
We are interested in autonomous learning, that is, how an embodied agent can determine what to learn, how to learn, and how to judge its learning success. In particular, we focus on learning to control a robotic body in a developmental fashion. Artificial intrinsic motivations are a central component that we develop using information theory and dynamical systems theory. We work on reinforcement learning, representation learning, and internal model learning.
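One common way to realize an artificial intrinsic motivation, offered here only as an illustrative sketch and not as the group's actual method, is to reward the agent in proportion to the prediction error of its internal forward model: transitions the model predicts poorly are "interesting", and the reward fades as the model learns.

```python
# Hypothetical 1-D linear forward model predicting the next state
# from (state, action); used purely to illustrate curiosity rewards.
w_s, w_a = 0.0, 0.0  # model weights, learned online

def intrinsic_reward(state, action, next_state, lr=0.1):
    """Curiosity-style reward = squared prediction error of the
    agent's internal forward model. The model is updated after each
    step, so familiar transitions yield ever smaller rewards."""
    global w_s, w_a
    pred = w_s * state + w_a * action
    err = next_state - pred
    w_s += lr * err * state   # gradient step on the squared error
    w_a += lr * err * action
    return err * err

r1 = intrinsic_reward(1.0, 0.5, 1.2)  # novel transition: large error
r2 = intrinsic_reward(1.0, 0.5, 1.2)  # same transition again: smaller
```

Because the reward is tied to the agent's own model rather than to an external goal, it drives the developmental exploration described above without any task-specific supervision.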
"creating an ultra-realistic immersive world has given rise to a thriving offshoot that’s far removed from the glare of game arcades. For the last two decades, biologists have been using VR as a tool to reveal fundamental principles about the neuronal circuitry underpinning behaviour in animals. And in Konstanz, behavioural biologists are joining forces with computer scientists to push the limits of this technology to gain insights into decision-making in animal collectives that were previously inaccessible.
What we call “VR” is technically defined as an immersive environment where the sensory organs (such as visual or auditory) of the user are artificially stimulated to alter the perception of reality. While we usually think of people using VR, animals can also be placed in virtual environments. But in place of a headset, the animal’s whole body is within the space."
…an animal immersed in a realistic and dynamic, yet synthetic world…
So why would you put animals into virtual worlds? To answer this, it’s important to understand the powerful technique of artificial stimulation for studying behaviour. Artificial stimuli can reliably elicit behaviours in animals during experiments, thereby providing a deeper understanding of the decision-making of animals. …experimenters can tweak properties of the artificial stimuli and plan the timing of delivery to systematically test behaviours.
Video playback has a major limitation: it does not react to the animal viewing it. Yet in the real world, action and reaction are intricately linked. If a spider acts aggressively, the spider observing it should also respond. So for biologists to turn digital stimuli into something closer to reality, they had to link action and reaction.
Animal experiments broke into true VR territory in the early 2000s, when technology became capable of simulating the real world in two important ways: the animal could see the world from an egocentric perspective and, crucially, that world reacted in real time.
Navigation in virtual mazes allowed scientists to pinpoint circuits that underlie cognition, learning, and memory.
In the FreemoVR system, individual animals are embedded in a photorealistic synthetic world in which they can interact with virtual organisms, or inspect and move around virtual obstacles, just as they do in the real world. Graphics are projected into the volume to create a virtual world in full 3D with depth cues. To ensure the illusion is preserved, the animal’s movement is tracked and the graphics are updated accordingly.
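The track-then-update cycle described above can be sketched as a minimal closed-loop renderer. The tracker and renderer stubs below are hypothetical placeholders, not the actual FreemoVR API:

```python
import math

def track_position(t):
    """Hypothetical tracker stub: returns the animal's (x, y, z)
    position at time t, here a simple circular swim path."""
    return (math.cos(t), math.sin(t), 0.5)

def render_scene(position):
    """Hypothetical renderer stub: redraw the virtual world from the
    animal's egocentric viewpoint. Here it just records the camera pose."""
    return {"camera": position}

def vr_loop(n_frames=10, fps=200):
    """Closed loop: measure the animal's pose, then update the
    graphics, once per frame, so the world reacts in real time."""
    frame_time = 1.0 / fps  # 5 ms per frame at 200 Hz
    frames = []
    for i in range(n_frames):
        pos = track_position(i * frame_time)  # 1. track the animal
        frames.append(render_scene(pos))      # 2. update the world
    return frames

frames = vr_loop()
```

The essential property is that step 2 depends on step 1 within the same frame: unlike video playback, the stimulus is a function of the animal's own movement.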
In his research programme in Konstanz, Couzin has applied this platform to study fish, locusts, and flies, and in doing so has begun to decipher the pathways of communication in animal collectives.
“Virtual reality offers a means of controlling causality,” says Couzin.
...“Combined with some cool projection systems you can really start fooling humans about reality.”
But of course, it’s not enough to design a virtual world that looks realistic to us. The animals must perceive it to be real. Considering vision alone, many animal species show a range of properties that differ from our own. For instance, the human visual system merges a stream of images into a continuous percept when presented with a refresh rate of at least 30 images per second, whereas this happens at 200 images per second in insects.
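Those refresh-rate figures translate directly into per-frame rendering budgets; the quick calculation below uses only the two values quoted in the text:

```python
def frame_budget_ms(refresh_hz):
    """Time available to render one frame, in milliseconds."""
    return 1000.0 / refresh_hz

human_budget = frame_budget_ms(30)    # roughly 33.3 ms per frame for humans
insect_budget = frame_budget_ms(200)  # only 5 ms per frame for insects
```

In other words, a VR system that convinces an insect must track and redraw the world more than six times faster than one built for human eyes.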