Visual Artist, Technologist: Kaiqing Huang
Deputy Producer, Editing: Qiao Xiao, Xiaoqin Zhou
Performer: Jiaming Wang, Lingmin Zheng
Credits: Jiayun Hua, Xinyue Li, Yihao Zhao, Jingjing Zeng
Tutor: Tongzhou Yu, Yongru Lu, Jiaming Wang
"Crossroads-Phantom Opera" is an experimental performance that combines augmented reality with opera performance. It explores the possibilities of integrating cutting-edge technology into traditional performances, based on the classic Chinese Peking opera "Crossroads." In this performance, AR technology is used to recreate one of the core characters as a virtual phantom, who interacts with real opera actors during rehearsals and performances. Additionally, "Crossroads-Phantom Opera" serves as the precursor and initial inspiration for the project "Dialogue of Self-Relationships."
BACKGROUND

The "Crossroads" is a traditional Peking Opera played by martial artists. The context of this play was a fight in the dark inn, which meant that even when facing each other, the two performers needed to act as if they were unable to see each other.
AR PERFORMANCE
In our project, we deliberately had the performer go without an AR headset so that he could not see the virtual phantom, which faithfully restores the relationship between the two characters in the original opera: neither can see the other. The performer therefore had to memorize the full set of movements for both parties and realize their interactions during this mixed-reality performance.
Time synchronization:
The performance must look consistent in the eyes of every audience member, yet we cannot expect all participants to open the program at exactly the same moment to achieve that precision.
Solution:
Each device reads the world clock, and all devices are scheduled to trigger the initial animation at the same agreed time, as sketched below.
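A minimal sketch of this scheduling logic in Python (the actual app runs in a different environment; the start time, function names, and the assumption that each device's clock is already NTP-synced are ours for illustration):

```python
import time
from datetime import datetime, timezone

# Hypothetical agreed start time, distributed to every device ahead of
# the show (e.g. baked into the build or a shared config file).
SHOW_START_UTC = datetime(2023, 6, 1, 19, 30, 0, tzinfo=timezone.utc)

def wait_for_show_start(trigger_animation):
    """Block until the agreed wall-clock moment, then fire the animation.

    Because every device waits on the same world time rather than on
    its own launch moment, participants who open the program at
    different times still trigger within a fraction of a second of
    each other.
    """
    remaining = (SHOW_START_UTC - datetime.now(timezone.utc)).total_seconds()
    if remaining > 0:
        time.sleep(remaining)
    trigger_animation()

# Usage: each audience device calls this once on launch.
# wait_for_show_start(lambda: print("phantom enters"))
```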
Spatial synchronization:
Immersal is used to pre-scan the venue and build its spatial data into the project. During testing, we placed a pedestal with distinct patterns on stage to evaluate spatial stability.
Test results:
When standing still, the positional error is negligible, but drift appears during movement, growing to about four centimeters. Auto-calibration corrects it within three to four seconds. Tracking becomes unstable roughly six meters from the stage center. Testing also showed that enabling the phone's LiDAR improves stability.
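The auto-calibration we observed amounts to re-localizing once tracked drift passes a threshold. Below is a rough Python illustration of that drift check; Immersal's actual correction happens inside its SDK, and the anchor positions, threshold, and `relocalize` callback here are hypothetical:

```python
import numpy as np

DRIFT_THRESHOLD_M = 0.02  # re-localize once drift exceeds ~2 cm

def drift_m(anchor_in_map, anchor_tracked):
    """Distance between where the pre-built map places a reference
    anchor and where live device tracking currently places it."""
    return float(np.linalg.norm(np.asarray(anchor_in_map) -
                                np.asarray(anchor_tracked)))

def maybe_relocalize(anchor_in_map, anchor_tracked, relocalize):
    """Trigger a re-localization pass when drift grows past the
    threshold; `relocalize` stands in for the SDK's correction call."""
    d = drift_m(anchor_in_map, anchor_tracked)
    if d > DRIFT_THRESHOLD_M:
        relocalize()
    return d

# Example: 4 cm of drift, as we measured while moving, triggers correction.
# maybe_relocalize([0, 0, 0], [0.04, 0, 0], lambda: print("re-localizing"))
```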
MOTION CAPTURE
We conducted a real-time skeletal motion capture test using the AI-based software Move One. In a separate test, we uploaded a pre-recorded video of the real actor's movements to the Radical platform, where AI was used to extract the motion from the footage.
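Move One and Radical are proprietary, but the underlying idea of single-camera AI pose recognition can be sketched with the open-source MediaPipe library as a stand-in (the video path and function name below are hypothetical):

```python
# pip install mediapipe opencv-python
import cv2
import mediapipe as mp

def extract_pose_landmarks(video_path):
    """Run single-camera AI pose estimation over a pre-recorded video
    and collect per-frame skeletal landmarks."""
    pose = mp.solutions.pose.Pose(static_image_mode=False)
    frames = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        result = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:
            frames.append([(lm.x, lm.y, lm.z)
                           for lm in result.pose_landmarks.landmark])
    cap.release()
    pose.close()
    return frames

# landmarks = extract_pose_landmarks("actor_take1.mp4")
```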
Xsens' system fixes sensors directly onto a motion-capture suit to track precise skeletal positions. We recorded the two actors' movements separately, while ensuring that every movement in the second recording exactly matched the corresponding movement in the first.
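Verifying that the second take follows the first can be done by comparing the two recordings joint by joint. A minimal sketch, assuming both takes are exported as per-frame joint positions trimmed to a common start cue (the array shapes and function name are our own):

```python
import numpy as np

def max_joint_error_m(take_a, take_b):
    """Worst per-joint position error between two mocap takes.

    take_a, take_b: arrays of shape (frames, joints, 3) holding joint
    positions in meters. Takes are assumed to start on the same cue;
    lengths may differ by a few frames, so we compare the overlap.
    """
    a = np.asarray(take_a, dtype=float)
    b = np.asarray(take_b, dtype=float)
    n = min(len(a), len(b))
    # Per-frame, per-joint Euclidean distance between the two takes.
    errors = np.linalg.norm(a[:n] - b[:n], axis=-1)
    return float(errors.max())

# A small max error confirms the second recording tracks the first.
```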

