We’re pioneering advancements in AI, machine learning, and creative technology.
Research
TGITM’s research spans AI, machine learning, and creative technology, producing work that shapes emerging methods in these fields. Our team publishes regularly in leading journals and conferences, contributing to the broader conversation on technological innovation. Beyond traditional research, our experts are deeply involved in academia, teaching at some of the world’s most prestigious universities. They have also founded multiple companies, developed bespoke tools, and contributed to a range of groundbreaking projects.
Here are a few highlights of our team’s achievements:
Selected Research
Automatic Programming of VST Sound Synthesizers Using Deep Networks and Other Techniques | Journal Article | Read → |
Write once run anywhere revisited: machine learning and audio tools in the browser with C++ and emscripten | Conference Proceedings | Read → |
Clustering of Gaze During Dynamic Scene Viewing is Predicted by Motion | Journal Article | Read → |
Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes | Journal Article | Read → |
Do the eyes really have it? Dynamic allocation of attention when viewing moving faces | Journal Article | Read → |
Time Domain Neural Audio Style Transfer | Conference Proceedings | Read → |
Watching the world go by: Attentional prioritization of social motion during dynamic scene viewing | Journal Article | Read → |
Do low-level visual features have a causal influence on gaze during dynamic scene viewing? | Abstract | Read → |
Mining Unlabeled Electronic Music Databases through 3D Interactive Visualization of Latent Component Relationships | Conference Proceedings | Read → |
Corpus-based visual synthesis: an approach for artistic stylization | Symposium | Read → |
Audiovisual Scene Synthesis | Ph.D. Thesis | Read → |
The debate on screen time: An empirical case study in infant-directed video | Book Chapter | Read → |
Auracle: how are salient cues situated in audiovisual content? | Journal Article | Read → |
Audiovisual Resynthesis in an Augmented Reality | Conference Proceedings | Read → |
Improving non-small cell lung cancer segmentation on a challenging dataset | Poster Presentation | Read → |
Fast, interactive, AI-assisted 3D lung tumour segmentation | Poster Presentation | Read → |
Selected Teaching
3D Modeling and Motion | Course (UCLA) | View → |
Cultural Automation with Machine Learning | Course (UCLA) | View → |
Cultural Appropriation with Machine Learning | Course (UCLA) | View → |
Audiovisual Interaction w/ Machine Learning | Course (CalArts) | View → |
Creative Applications of Deep Learning | Online Course (Kadenze Academy) | View → |
Workshops in Creative Coding – Mobile and Computer Vision | Course (Goldsmiths) | View → |
Audiovisual Processing for iOS Devices | Workshop (V&A Museum) | View → |
Introduction to openFrameworks | Workshop (Goldsmiths) | View → |
Workshops in Creative Coding: Computer Vision | Workshop (Goldsmiths) | View → |
Center for Experimental Media Art | Course (Srishti College) | View → |