
Professor

Cristóbal Curio, Prof. Dr.-Ing.

Building 9
Room 227

Phone +49 7121 271 4005



Since November 2014, I have been Full Professor of Cognitive Systems in the Department of Informatics at Reutlingen University. I am also associated with the Department of Informatics at the University of Tübingen and serve as a guest scientist at the Max Planck Institute for Intelligent Systems. Until 2013, I led the Applied Cognitive Engineering group at the Max Planck Institute for Biological Cybernetics.

In addition to my academic roles, I have led industry projects focused on autonomous driving and human-machine interaction design.

At the Cognitive Systems Research Group, we develop and apply technologies in computer vision, computer graphics, virtual and augmented reality, and machine learning. By integrating these with principles of applied human perception, we design, evaluate, and optimize the interfaces between emerging technologies and their users.

My overarching research goal is to synergize human and machine intelligence, advancing the field of Human-Centered Artificial Intelligence.

At the heart of our research is a central question:
How can we design robust components for truly human-centered computing systems?

We pursue this by bridging empirical experimentation and computational modeling—valuing both equally and combining them to develop intelligent, assistive technologies. Our work thrives at the intersection of disciplines, leveraging new insights from perception science, machine learning, and human-computer interaction.

The goal: to create AI systems that not only understand but also anticipate human needs—augmenting human abilities, improving usability, and promoting trust.

Approach

Our approach integrates:

  • AI & machine learning to model, predict, and adapt to human behavior

  • Human perception research to understand cognitive and sensory processes

  • Interactive technologies (e.g., AR/VR, vision & graphics systems) to prototype and evaluate assistive components

This interdisciplinary framework allows us to develop both:

  • New technical solutions (for human-AI interfaces and assistive systems)

  • Scientific tools that open up further research directions

We focus on three key research interfaces:

1 Human Perception & Computer Vision

We explore how machines can better perceive and interpret the world by learning from human perception. Our work bridges cognitive science and computer vision to design AI systems that understand complex scenes, anticipate human actions, and support real-time decision-making.

Key topics include:

  • Human-inspired models of attention and situation awareness

  • Perceptual metrics for evaluating visual systems in safety-critical contexts

  • Neurorobotics and assistive technologies that enhance motor function and autonomy

  • Human-machine collaboration frameworks powered by shared perception

  • Semantic scene understanding with deep neural architectures

  • Fusion of human and machine intelligence for robust, context-aware behavior

By aligning perception-driven insights with computational models, we aim to create vision systems that are not only intelligent but also transparent, adaptive, and grounded in how people actually see and act.

"Griff-Technik für die gelähmte Hand" ("Grip technique for the paralyzed hand") [Bild der Wissenschaft, in German]

"Jetzt ist morgen" ("Now is tomorrow") – Regional innovation feature on digital futures [www]

2 Interfacing Human Perception & Computer Graphics

In this research stream, we explore how perceptual principles and advanced graphics technologies can drive more inclusive, adaptive, and explainable human-AI interfaces. We focus on creating digital human representations that enable personalized and transparent interaction across age, ability, and context.

Our current work includes:

  • High-fidelity 3D scanning of faces and bodies for realistic digital twins and avatars

  • Interactive animation systems that respond to perceptual cues and emotional states

  • Personalized, multimodal feedback for enhanced accessibility and sensory augmentation

  • Dynamic body perception modeling to inform real-time social interaction with virtual agents

  • Explainable avatar behavior to improve trust and usability in AI-driven interfaces

3 Interfacing Computer Graphics and Computer Vision

At this interface, we develop intelligent systems that can perceive, simulate, and interact with complex environments. By combining computer graphics and computer vision, we build powerful tools for prototyping, training, and testing next-generation AI systems in realistic, data-rich virtual settings.


This work not only accelerates AI development but also opens the door to more explainable, robust, and adaptive systems that can learn safely and effectively from both real and virtual worlds.

Recent media mentions

Unterwegs in die Zukunft, Autonomes Fahren [Schwäbisches Tagblatt, in German]

Das Auto erkennt Gesten und Grimassen [Re:search Magazin, p. 9, in German]

Handshake mit dem Avatar [Camplus Magazin, 2019 (Reutlingen University), in German]

Selected publications
PEER-REVIEWED JOURNAL ARTICLES

de la Rosa S., Fademrecht L., Bülthoff H.H., Giese M.A., Curio C. (2018) Two ways to facial expression recognition? Motor and visual information have different effects on facial expression recognition, Psychological Science, 29(8), pp. 1257-1269.

Chiovetto E., Curio C., Endres D., Giese M. (2018) Perceptual integration of kinematic components in the recognition of emotional facial expressions, Journal of Vision, 18(4), pp. 1-19. ISSN: 1534-7362.

Dobs, K., Bülthoff, I., Breidt, M., Vuong, Q., Curio, C., Schultz, J. (2014) Quantifying human sensitivity to spatio-temporal information in dynamic faces. Vision Research 100, pp. 78 – 87.

PEER-REVIEWED CONFERENCE ARTICLES

Bramlage L, Karg M, Curio C: Plausible uncertainties for human pose regression. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV); 2023. pp. 15087-15096. DOI: 10.1109/ICCV51070.2023.01389

Burgermeister D, Curio C: PedRecNet: Multi-task deep neural network for full 3D human pose and orientation estimation. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV); 2022. pp. 441-448. DOI: 10.1109/IV51971.2022.9827202.

Essich M, Rehmann M, Curio C: Auxiliary Task-Guided CycleGAN for Black-Box Model Domain Adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV); 2023. pp. 541-550. DOI: 10.1109/WACV56688.2023.00061.

Ludl D., Gulde T., Curio C. (2019) Simple yet efficient real-time pose-based action recognition, 22nd IEEE International Conference on Intelligent Transportation Systems (ITSC), October 27-30.

Gulde T., Ludl D., Andrejtschik J., Thalji S., Curio C. (2019) RoPose-Real: Real World Dataset Acquisition for Data-Driven Industrial Robot Arm Pose Estimation, IEEE International Conference on Robotics and Automation (ICRA 2019), May 20-24, Montreal, pp 1-8.

Ludl D., Gulde T., Thalji S., Curio C. (2018) Using simulation to improve human pose estimation for corner cases, 21st IEEE International Conference on Intelligent Transportation Systems (ITSC), pp. 3575-3582. (Runner-Up Best Paper Award)

Baulig G., Gulde T., Curio C. (2019) Adapting Egocentric Visual Hand Pose Estimation Towards a Robot-Controlled Exoskeleton. In: Leal-Taixé L., Roth S. (eds) European Conference on Computer Vision, 2018 Workshops. ECCV 2018. Lecture Notes in Computer Science, vol 11134. Springer

Gulde T., Ludl D., Curio C. (2018) RoPose: CNN-Based 2D Pose Estimation of Industrial Robots, 14th IEEE Conference on Automation Science and Engineering (CASE), Munich, August, pp. 463-470.

Gulde T., Kärcher S., Curio C (2016), Vision-Based SLAM Navigation for Vibro-Tactile Human-Centered Indoor Guidance. In: Hua G., Jégou H. (eds) Computer Vision – ECCV 2016 Workshops. ECCV 2016, Lecture Notes in Computer Science, vol 9914.

Schuster F., Zhang W., Keller C.G., Haueis M., Curio C. (2017) Joint graph optimization towards crowd based mapping, IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), pp. 1-6.

Breidt M., Bülthoff H.H., Curio C. (2016) Accurate 3D head pose estimation under real-world driving conditions: A pilot study, IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), pp. 1261-1268.