Ph.D. Computer Science, UCLA 2007
M.S. Computer Science, UCLA 2002
Caught up in the dot-com madness 1993 - 2001
Mobile 3D character platform: a platform for quickly generating chat- or interaction-based virtual humans/characters. It allows fast scripting of complex functionality such as speech, nonverbal behavior, and lip syncing to speech.
Fast Avatar Capture Software is a tool for automatically capturing a 3D avatar of a human subject in less than a few minutes, without the need for a separate operator. By Evan Suma, Andrew Feng, Richard Wang, and Ari Shapiro.
Autorigger and reshaper is a tool for automatically rigging, skinning, and reshaping a 3D human body scan obtained from an RGB-D sensor (such as the Microsoft Kinect, Intel RealSense, or Occipital Structure Sensor) or a 3D scanning cage.
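Since an autorigger's output is ultimately a skeleton plus per-vertex skinning weights, the deformation model those weights feed is standard linear blend skinning: each deformed vertex is a weighted sum of the vertex transformed by each bone. Here is a minimal illustrative sketch in Python/NumPy (my own function name and array shapes, not the tool's actual code):

```python
import numpy as np

def linear_blend_skin(vertices, weights, bone_transforms):
    """Linear blend skinning: v' = sum_i w_i * (T_i @ v).
    vertices: (V, 3), weights: (V, B), bone_transforms: (B, 4, 4)."""
    V = vertices.shape[0]
    homog = np.hstack([vertices, np.ones((V, 1))])               # (V, 4)
    # Transform every vertex by every bone, then blend by weights.
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homog)  # (B, V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)         # (V, 4)
    return blended[:, :3]

# Two bones: identity, and a translation of +1 in x. A vertex at the
# origin weighted half-and-half between them lands at x = 0.5.
T = np.stack([np.eye(4), np.eye(4)])
T[1, 0, 3] = 1.0
v = np.array([[0.0, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])
print(linear_blend_skin(v, w, T))
```

The hard part of automatic rigging is producing good `weights` for an arbitrary scan; the skinning step itself is just this weighted blend.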
SmartBody is a character animation system that gives an interactive character an extensive set of capabilities and behaviors, such as locomotion, steering, object manipulation, speech synthesis, emotional expression, gesturing, physical simulation, and gazing, among others.
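SmartBody realizes behaviors described in the Behavior Markup Language (BML; see the BML-compliance paper in the publication list). As a rough illustration of what such a behavior request looks like, here is a hedged Python sketch that assembles a BML-style message. The element vocabulary and attributes are simplified for illustration, not SmartBody's literal schema:

```python
# Illustrative sketch only: builds a simplified BML-style request string,
# not an actual SmartBody API call.
from xml.sax.saxutils import escape

def bml_request(character, text, gaze_target=None):
    """Build a minimal BML block asking a character to speak,
    produce a beat gesture synchronized to the speech, and
    optionally gaze at a target."""
    parts = [f'<speech id="s1">{escape(text)}</speech>',
             '<gesture id="g1" type="BEAT" stroke="s1:start"/>']
    if gaze_target:
        parts.append(f'<gaze id="z1" target="{escape(gaze_target)}"/>')
    body = "\n  ".join(parts)
    return f'<bml character="{character}">\n  {body}\n</bml>'

print(bml_request("Brad", "Hello there!", gaze_target="Camera"))
```

The point of this kind of markup layer is that a dialogue system can request coordinated speech, gesture, and gaze without knowing anything about the underlying animation machinery.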
The DANCE software is used for physics-based animation research, including dynamic simulation of rigid bodies, motion capture and dynamic control.
S. Narang, A. Best, T. Randhavane, A. Shapiro, D. Manocha, PedVR: Simulating Natural Interactions between a Real User and Virtual Crowds, 22nd ACM Symposium on Virtual Reality Software and Technology (VRST), Munich, Germany, November, 2016 (paper, project)
G. Lucas, E. Szablowski, J. Gratch, A. Feng, T. Huang, J. Boberg, A. Shapiro, The effect of operating a doppelganger in a 3D simulation, ACM SIGGRAPH Conference on Motion in Games, San Francisco, CA, October, 2016 (paper) Best Presentation award!
S.H. Kang, A. Feng, M. Seymour, A. Shapiro, Study comparing video-based characters and 3D based characters on mobile devices for chat, ACM SIGGRAPH Conference on Motion in Games, San Francisco, CA, October, 2016 (paper)
M. Chollet, N. Chandrashekhar, A. Shapiro, S. Scherer, L.P. Morency, Manipulating the Perception of Virtual Audiences using Crowdsourced Behaviors, 16th International Conference on Intelligent Virtual Agents, Los Angeles, CA, September, 2016 (paper)
D. Casas, A. Feng, O. Alexander, G. Fyffe, R. Ichikari, P. Debevec, H. Li, K. Olszewski, E. Suma, A. Shapiro, Photorealistic Blendshape Modeling from RGB-D Sensors, 29th Conference on Computer Animation and Social Agents, Geneva, Switzerland, May 23rd-25th, 2016 (paper, video)
A. Feng, D. Casas, A. Shapiro, Avatar Reshaping and Automatic Rigging Using a Deformable Model, ACM SIGGRAPH Conference on Motion in Games, Paris, France, November 16th-18th, 2015 (paper, video, software)
M. Papaefthymiou, A. Feng, A. Shapiro, G. Papagiannakis, A fast and robust pipeline for populating mobile AR scenes with gamified virtual characters, SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications, Kobe, Japan, November 2-5, 2015
S.H. Kang, A. Feng, A. Leuski, D. Casas, A. Shapiro, Effect of an Animated Virtual Character on Mobile Chat Interactions, Proceedings of the 3rd International Conference on Human-Agent Interaction, Daegu, Korea, October 21-24, 2015 (video, paper, bibtex)
A. Feng, A. Leuski, S. Marsella, D. Casas, S.H. Kang, A. Shapiro, A Platform for Building Mobile Virtual Humans, Proceedings of the 15th International Conference on Intelligent Virtual Agents, Delft, Netherlands, August 26-28, 2015 (paper, bibtex, software)
A. Feng, G. Lucas, S. Marsella, E. Suma, C.C. Chiu, D. Casas, A. Shapiro, Acting the Part: The Role of Gesture in Avatar Identity, ACM SIGGRAPH Conference on Motion in Games, Los Angeles, CA, November 6-8, 2014 (paper, video)
E. Miguel, A. Feng, A. Shapiro, Towards Cloth-Manipulating Characters, The 27th Conference on Computer Animation and Social Agents, Houston, TX, May 26-28 (paper, video)
A. Shapiro, A. W. Feng, R. Wang, G. Medioni, H. Li, M. Bolas, E. Suma, Rapid Avatar Capture Using Commodity Sensors, The 27th Conference on Computer Animation and Social Agents, Houston, TX, May 26-28 (paper, video)
A. W. Feng, Y. Huang, Y. Xu, A. Shapiro, Fast, Automatic Character Animation Pipelines, Journal of Visualisation & Computer Animation (paper preprint, bibtex)
Y. Xu, A. W. Feng, S. Marsella, A. Shapiro, A Practical And Configurable Lip Sync Method for Games, ACM SIGGRAPH Conference on Motion in Games, Dublin, Ireland, November 2013 (paper, video, bibtex)
L. Batrinca, G. Stratou, A. Shapiro, L.P. Morency, S. Scherer, Cicero-towards a multimodal virtual audience platform for public speaking training, 13th International Conference on Intelligent Virtual Agents, Edinburgh, UK, August 2013 (paper, bibtex)
A. Hartholt, D. Traum, S. Marsella, A. Shapiro, G. Stratou, A. Leuski, L.P. Morency, J. Gratch, All Together Now: Introducing the Virtual Human Toolkit, 13th International Conference on Intelligent Virtual Agents, Edinburgh, UK, August 2013 (paper, bibtex)
S. Marsella, A. Shapiro, A. W. Feng, Y. Xu, M. Lhommet, S. Scherer, Towards Higher Quality Character Performance in Previz, Digital Production Symposium, Anaheim, CA July 2013 (paper, bibtex)
S. Marsella, Y. Xu, A. W. Feng, M. Lhommet, S. Scherer, A. Shapiro, Virtual Character Performance from Speech, Symposium on Computer Animation, Anaheim, CA July 2013 (paper, video, bibtex)
A. Shapiro, A. W. Feng, The Case for Physics Visualization in an Animator's Toolset, 8th International Conference on Computer Graphics Theory and Applications, Barcelona, Spain, February, 2013 (pdf, bibtex)
A. Feng, Y. Huang, Y. Xu, A. Shapiro, Automating the Transfer of a Generic Set of Behaviors Onto a Virtual Character, The Fifth international conference on Motion in Games, Rennes, France, November, 2012 (pdf, video, bibtex) Best Paper award!
A. Feng, Y. Huang, M. Kallmann, A. Shapiro, An Analysis of Motion Blending Techniques, The Fifth international conference on Motion in Games, Rennes, France, November, 2012 (pdf, video1, video2, bibtex)
A. Feng, Y. Xu, A. Shapiro, An Example-Based Motion Synthesis Technique for Locomotion and Object Manipulation, Symposium of Interactive 3D Graphics and Games, Costa Mesa, CA, March 2012 (pdf, video, bibtex)
A. Shapiro, Building a Character Animation System, Invited Talk, Motion in Games, 2011 (pdf, bibtex)
H. van Welbergen, Y. Xu, M. Thiebaux, W.W. Feng, J. Fu, D. Reidsma, A. Shapiro, Demonstrating and Testing the BML Compliance of BML Realizers, IVA 2011 (pdf, bibtex)
A. Shapiro, S.H. Lee, Practical Character Physics For Animators, IEEE Computer Graphics and Applications, July/August 2011 (pdf, video, bibtex)
B. Allen, D. Chu, A. Shapiro, P. Faloutsos, On Beat! Timing and Tension for Dynamic Characters, ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), ACM Press, August, 2007 (pdf, video, bibtex).
A. Shapiro, D. Chu, B. Allen, P. Faloutsos, The Dynamic Controller Toolkit, The 2nd Annual ACM SIGGRAPH Sandbox Symposium on Videogames, San Diego, CA, August, 2007 (pdf, videos), (bibtex)
A. Shapiro, M. Kallmann, P. Faloutsos, Interactive Motion Correction and Object Manipulation, Symposium on Interactive 3D Graphics and Games, Seattle, Washington, April, 2007
A. Shapiro, Y. Cao, P. Faloutsos, Style Components, Graphics Interface 2006, Quebec City, Quebec, Canada, June, 2006 (pdf, videos, bibtex)
A. Shapiro, P. Faloutsos, V. Ng-Thow-Hing, Dynamic Animation and Control Environment, Graphics Interface 2005, p. 61-70, Victoria, British Columbia, Canada, May, 2005.
A. Shapiro, F. Pighin, P. Faloutsos, Hybrid Control For Interactive Character Animation, The Eleventh Pacific Conference on Computer Graphics and Applications, p. 455-460, Canmore, Alberta, Canada, October, 2003.
D. Casas, O. Alexander, A. Feng, G. Fyffe, R. Ichikari, P. Debevec, R. Wang, E. Suma, A. Shapiro. My Digital Face, SIGGRAPH 2015 Real Time Live, Los Angeles, California, August 2015 (video)
D. Casas, O. Alexander, A. Feng, G. Fyffe, R. Ichikari, P. Debevec, R. Wang, E. Suma, A. Shapiro. Blendshapes from Commodity RGB-D Sensors, SIGGRAPH 2015 Talks, Los Angeles, California, August 2015 (paper, video)
A. Shapiro, A. Feng, R. Wang, H. Li, M. Bolas, G. Medioni, E. Suma, Make Me An Avatar, SIGGRAPH 2014 Real Time Live, Vancouver, Canada, August 2014
A. Feng, A. Shapiro, R. Wang, H. Li, M. Bolas, G. Medioni, E. Suma, Rapid Avatar Capture and Simulation Using Commodity Depth Sensors, SIGGRAPH 2014 Talk, Vancouver, Canada, August 2014 (pdf)
A. Shapiro, S.H. Lee, Practical Character Physics For Animators, SIGGRAPH 2009 Talk, New Orleans, LA, August 2009 (pdf)
J. Bayever, J. Gordon, G. McMillan, Y. Lakhani, J. Mancewicz, A. Shapiro, Making Statues Move, SIGGRAPH 2008 Talk, Los Angeles, CA, August 2008
A. Shapiro, P. Faloutsos, Interactive and Reactive Control, SIGGRAPH 2005 Sketches, Los Angeles, CA, August 2005
A. Shapiro, Y. Cao, P. Faloutsos, Interactive Motion Decomposition, SIGGRAPH 2004 Sketches, Los Angeles, CA, August 2004
A. Shapiro, P. Faloutsos, Complex Character Animation that Combines Kinematic and Dynamic Control, SIGGRAPH 2003 Sketches & Applications, San Diego, CA, July 2003.
Refereed Posters & Demos
G. Lucas, E. Szablowski, J. Gratch, A. Feng, T. Huang, J. Boberg, and A. Shapiro, Do Avatars that Look Like their Users Improve Performance in a Simulation?, IVA 2016, Los Angeles, CA, September, 2016
R. Artstein, A. Gainer, K. Georgila, A. Leuski, A. Shapiro, D. Traum, New Dimensions in Testimony Demonstration, NAACL 2016, San Diego, CA, June 2016
S.H. Kang, A. Feng, A. Leuski, D. Casas, A. Shapiro, Smart Mobile Virtual Humans: "Chat with me!", IVA 2015, Delft, Netherlands, August 2015
D. Casas, O. Alexander, G. Fyffe, R. Ichikari, R. Wang, P. Debevec, E. Suma, A. Shapiro, Rapid Photorealistic Blendshapes from Commodity RGB-D Sensors, I3D 2015, San Francisco, California, March 2015 (paper) Best Poster Award!
A. Shapiro, A. Feng, R. Wang, G. Medioni, E. Suma, Automatic Acquisition and Animation of Virtual Avatars, IEEE VR 2014, Minnesota, March 2014 Honorable Mention award!
A. Leuski, A. Shapiro, R. Gowrisankar, Y. Xu, T. Richmond, A. Feng, Mobile Personal Healthcare Mediated by Virtual Humans, Proceedings of the companion publication of the 19th international conference on Intelligent User Interfaces, Haifa, Israel, February 2014 (paper, bibtex)
E. Miguel, A. Feng, Y. Xu, A. Shapiro, Towards Cloth-Manipulating Characters, ACM SIGGRAPH Conference on Motion in Games, Dublin, Ireland, November 2013
Y. Xu, A. Feng, A. Shapiro, A Simple Method for High Quality Lip Syncing, Symposium of Interactive 3D Graphics and Games 2013, Orlando, Florida, March, 2013
A. Shapiro, D. Chu, P. Faloutsos The Controller Toolkit, Symposium of Computer Animation 2006, Posters & Demos, Vienna, Austria, August 2006
M. Kallmann, A. Shapiro, P. Faloutsos, Planning Motions in Motion, Symposium of Computer Animation 2006, Posters & Demos, Vienna, Austria, August 2006
A. Shapiro, P. Faloutsos, Steps Toward Intelligent Interactive Control, Symposium of Computer Animation 2005, Posters & Demos, Los Angeles, CA, July 2005
A. Shapiro, P. Faloutsos, Victor Ng-Thow-Hing, Dynamic Animation and Control Environment, Eurographics Symposium on Computer Animation, Posters & Demos, Grenoble, France, August 2004
A. Shapiro, Y. Cao, P. Faloutsos, Stylistic Motion Decomposition, Eurographics Symposium on Computer Animation, Posters & Demos, Grenoble, France, August 2004
Book Chapters & Demos
L.P. Morency, A. Shapiro, S. Marsella, Embodied Autonomous Agents, chapter in Handbook of Virtual Environments: Design, Implementation, and Applications, 2015
L.P. Morency, A. Shapiro, S. Marsella, Modeling Human Communication Dynamics for Virtual Humans, Coverbal Synchrony in Human-Machine Interaction, CRC Press, 2013
Alvin and the Chipmunks: The Squeakquel, Rhythm & Hues Studios 2009 (feature film)
The Incredible Hulk, Rhythm & Hues Studios 2008 (feature film)
The Force Unleashed, Industrial Light & Magic/LucasArts 2008 and The Force Unleashed: Ultimate Sith Edition 2009 (video games)
A web-based version of the game of Diplomacy. Originally developed by Guy Tsafnat and me, this version is written in Java and plugs into a JSP-compliant webserver. It was used as a testbed for my automated player and can currently self-play approximately 1,000 games per day.
Sacramento 2001 for my friend's bachelor party. This part of the trip was called Chunder. Not all of us made it through the falls. Here's the entire sequence if you'd like to see it.
Co-presented the DocOn prototype app at the USC Body Computing Conference. My team created a 'digital world expert' in cardiology and atrial fibrillation based on Dr. Leslie Saxon. The idea is to 'scale' the reach of a world expert in various health areas and provide information to those who do not have access to such resources. The digital doctor was built using our Rapid Avatar pipeline, and the mobile app was built using our mobile virtual humans platform.
Started my appointment as research faculty at the University of Southern California in the Viterbi School of Engineering.
I was a keynote speaker at the Computer Graphics International (CGI) conference in Crete, Greece. http://www.ics.forth.gr/CGI2016. My presentation was entitled "Rapid Creation of Digital Characters"
Our new process for creating a photorealistic virtual character, combining bodies and faces, uses commodity hardware and takes only 20 minutes, with no artistic intervention or technical expertise required.
Uploaded my Practical Character Physics for Animators talk which covers the need for physics visualization in an animator's toolset with many examples from real films:
February 3rd, 2016
We have released all the software and process needed to scan, rig and create your own avatar in minutes.
January 29th, 2016
Coverage of Dr. Leslie Saxon's initiative to virtualize doctors as a means to provide additional avenues of communication to medical experts. My team put together a prototype of the virtual doctor using our avatar technologies as an economical way to generate a virtual character of a specific person. Typically, creating a photorealistic digital representation of a particular person takes a massive amount of 3D expertise and time. We were able to accomplish this in about 2 days, and today that process would take around 4 hours, making such a representation economically viable for a large number of people. You can see my researcher Andrew Feng with Dr. Leslie Saxon doing facial capture.
November 14th, 2015
We have released our tool for automatic rigging and reshaping 3D human body scans for download. You can see it in action here:
September 10th, 2015
Our latest work, accepted at the 2015 ACM SIGGRAPH Motion in Games Conference (MIG 2015), shows our automated rigging and body reshaping from RGB-D or photogrammetry scans.
August 14th, 2015
My summer intern, Marco Volino, put together a 100-camera photogrammetry cage based on Raspberry Pis for his summer project in my lab. The result is that we can now construct a high-quality 3D model from a body scan in about 15 minutes. The video shows some results from the system and includes many of USC ICT's summer interns (everybody loved to be scanned and see themselves in 3D...)
August 12th, 2015
Our presentation at SIGGRAPH's Real Time Live called 'My Digital Face'. We scanned, constructed and animated a photorealistic face (of my colleague and co-presenter, Evan Suma) in 5 minutes using a single Intel RealSense sensor.
July 4th, 2015
Photorealistic faces from RGB-D sensors
Results from our rapid blendshapes pipeline using the Intel RealSense sensor. We'll be demonstrating the scanning, processing, and control of a photorealistic digital face in 5 minutes at this year's SIGGRAPH 2015 Real Time Live event.
April 27th, 2015
Photorealistic faces from RGB-D sensors
Our latest project showing the generation of a set of blendshapes from a single RGB-D sensor using a near-automatic pipeline. We won Best Poster at the I3D 2015 conference in March for this work.
April 17, 2014
News article from USC's Viterbi School of Engineering on the Fast Avatar Capture and Simulation work:
The full article is here.
February 21, 2014
Here is a Gizmodo article on our Rapid Avatar Capture and Simulation project where we can capture a person using a first-generation Microsoft Kinect system and simulate them in a matter of minutes.
We believe that this kind of capability dramatically changes the economics of avatar capture (essentially, it's now free and takes very little time) and will have an impact on 3D character acquisition and use going forward.
Here's a video of the entire capture process:
I volunteered for a project out of ICT's Graphics Lab. My face and performance were captured by their Light Stage (I was not involved with the technical aspects). Here are the results, as shown on different technological platforms by Nvidia:
In the days following the capture, I would walk by my colleagues in the Graphics Lab (my office is very close to theirs) and they would study me very closely as I walked by. Occasionally they would say things like "You should see what we are doing to 'Ari' today." It didn't take long before I insisted that they call my digital doppelganger 'Ira' instead of 'Ari', to loosen some of the association between myself and this digital version of myself. You can do what you want with Ira; it gets a little personal when you are doing it to 'me'. I'm sure that this phenomenon of capturing a person, digitizing them, and then putting their digital version into various situations will lead to a number of psychological studies, particularly now that the distinction between the two is getting smaller and smaller.
In case you are curious, the 'yogurt parfait' incident came when the director (Oleg Alexander from ICT) asked me to get mad about something so that they could record some kind of emotional expression. About a week before the capture session, I had stopped by McDonald's in the morning for their $1 sausage muffin (substituting sausage for egg) and the $1 fruit and yogurt parfait, as I had done a few times a week for the past month. Usually the strawberries are a bit too cold, and sometimes frozen, so I would typically eat the yogurt and would sometimes not even touch the strawberries, depending on how icy and cold they were. That one day, they gave me an entire plastic cup full of frozen, hard strawberries without a bit of yogurt, which I didn't realize until I had left the drive-through. I came back the next day, asked for a refund, and then asked the cashier to check the parfaits and make sure that there was enough yogurt in them. This turned into an unpleasant exchange with the manager on duty, who insisted that all parfaits are exactly the same, that it would have been impossible to get a parfait that lacked yogurt, and who refused to check any of the existing parfaits for their yogurt content (I still wanted another one...). I then wrote a complaint to McDonald's via email. They sent me a coupon for a free meal, told me they took my complaint seriously, and said they would talk to the manager at that restaurant. That was about as much effort as I wanted to put into a defective $1 purchase. I went back to that McDonald's several weeks later and noticed the manager wearing a different, seemingly more formal uniform, and the cashier, for the first time, refused to substitute sausage for egg in the $1 sausage muffin. So I assume that someone talked to the owner and the manager, and, among other things, a decision was made to not allow substitutions anymore. Not sure what happened to the parfaits - I stopped buying them.
I suspect that my complaint set a number of things in motion. All in all, I stopped going there for breakfast. So it's nice that Digital Ira can carry on my message without any additional effort on my part (how long do things last on the Internet these days? Forever?) and stand up for the little guy against the corporate multinational.