Monday, March 17, 2025

Sluggo the Virtual Puppet: Part 4

Continued from Part 3.

Winter 2025 Review: Sluggo the Virtual Puppet

Sluggo says "hello"

This is a continuing journal of my project to design and build "Sluggo," a physical hand puppet that I digitized, rigged, and developed with hand gesture controls in Unity.

Bringing Sluggo to Life

The model was rigged with a skeleton and the skin was bound with weight paint. The final model was ready to bring into Unity for hand tracking control.

Step 1: Import to Unity
Sluggo in Unity

I exported the model from Maya to FBX format and imported it into Unity 2022.3.37f1. I used the same project in which I had done some Mediapipe experimentation earlier, so many of the hand tracking control elements were already ready to go.

I had a moment of panic when I tested Sluggo's skeleton and found none of the joints would move any part of his body. Returning to Maya, I found that the "Animation" option was turned off in my export settings, which included skin weights. I re-exported with the option turned on and the mesh operated flawlessly in Unity.

Step 2: Test the Limits
Sluggo out of his "default" pose

I rotated each of Sluggo's joints that I wanted to control to determine the limits of motion and axis of rotation for each. For example, the lower lip joint can move about 75 degrees to close his mouth, and each of the spine joints can bend about 30 degrees so that Sluggo can comfortably face the camera or look down.

Step 3: Hand Tracking
Courtesy of Mediapipe

I used the Unity plugin for Mediapipe for hand tracking. This system tracks one or two hands from a visual input like a webcam. The hand is tracked as a list of "landmarks," each representing a specific position on the hand in the 2D visual input. The developer can take those positions and use some math to determine the angle and gesture of each individual finger. For example, I can take the vector between landmarks 17 and 20 to determine the angle of the pinky finger. Once I know that, I can use that data to control the rotation of one of the eyestalk joints.
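As a rough sketch of that landmark math (the landmark coordinates and helper name here are made up for illustration; Mediapipe reports each landmark as a normalized position in the image, with y growing downward):

```python
import math

# Hypothetical normalized landmark positions (x, y) as Mediapipe might report:
# index 17 = pinky base (MCP), index 20 = pinky tip.
landmarks = {17: (0.62, 0.55), 20: (0.66, 0.38)}

def finger_angle(base, tip):
    """Angle of the base->tip vector in degrees, measured from screen-right.

    The y component is flipped because image coordinates grow downward.
    """
    dx = tip[0] - base[0]
    dy = base[1] - tip[1]
    return math.degrees(math.atan2(dy, dx))

angle = finger_angle(landmarks[17], landmarks[20])
# This angle can then be remapped to drive a joint, e.g. an eyestalk rotation.
```

With the sample values above, the pinky points up and slightly right, so the angle lands somewhere between straight-right (0 degrees) and straight-up (90 degrees).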

Step 4: Visual Scripting
My visual script to control Sluggo

I also took this project as an opportunity to try Unity's relatively new visual scripting platform. I am well acquainted with using C# with Unity and I've used Unreal's Kismet and Blueprint visual scripting, but I hadn't tried this one yet.

On initialization, the script tracks and stores the default rotation values of any joints that are controlled by hand tracking (this is all tucked away into a subgraph triggered by On Start). On every frame, the script analyzes the positions of certain hand landmarks (the lower right hand part of the graph) and sets the "bend" value for certain joints accordingly. Also on every frame, the script adjusts the rotation values of the controlled joints according to its current "bend" value (the unlabeled section just above the center of the graph). Each joint processes a subgraph I labeled "MySetRotation" that takes in the game object reference, the default rotation vector, the maximum rotation in degrees, and the bend value as arguments.
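The core of the "MySetRotation" subgraph can be sketched in plain Python (the function name, single-axis simplification, and clamping are my own; the actual graph operates on Unity game object references and Vector3 rotation values):

```python
# A plain-Python sketch of the "MySetRotation" subgraph logic described above.

def my_set_rotation(default_rotation, max_degrees, bend):
    """Blend a joint from its stored default rotation toward its limit.

    default_rotation: the joint's rotation captured on initialization (degrees)
    max_degrees:      the joint's maximum travel (e.g. 75 for the lower lip)
    bend:             0.0 (relaxed) to 1.0 (fully bent), driven by hand tracking
    """
    bend = max(0.0, min(1.0, bend))  # clamp out tracking noise
    return default_rotation + bend * max_degrees

# Example: the lower-lip joint, closed about halfway.
print(my_set_rotation(default_rotation=0.0, max_degrees=75.0, bend=0.5))  # 37.5
```

Storing the default rotation once at startup and treating "bend" as a normalized 0-to-1 value keeps the per-frame update simple: every controlled joint runs the same blend, differing only in its arguments.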

Mediapipe also generates a square that tracks where the hand is in the image, how large it is, and how it is rotated on the image plane. I use this information to set Sluggo's position, rotation (perpendicular to the image plane), and proximity to the game camera. I also track the vector between landmarks 17 and 5 to determine how much the wrist is rotated and use that data to pivot Sluggo left or right.
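One plausible mapping from that hand square to puppet placement looks like this (the function, field names, and world-scale constants are all illustrative, not taken from the project):

```python
# A minimal sketch of mapping a normalized hand bounding square to puppet
# placement. All names and constants here are hypothetical.

def place_puppet(box_cx, box_cy, box_size, box_rotation_deg,
                 world_width=10.0, world_height=7.5, near=2.0, far=8.0):
    """Map a normalized (0..1) hand box to world position, depth, and roll."""
    x = (box_cx - 0.5) * world_width    # left/right on screen
    y = (0.5 - box_cy) * world_height   # flipped: image y grows downward
    z = far - box_size * (far - near)   # bigger hand = hand is closer = puppet nearer
    return (x, y, z, box_rotation_deg)  # roll stays in the image plane

print(place_puppet(0.5, 0.5, 1.0, 0.0))  # (0.0, 0.0, 2.0, 0.0)
```

The key design choice is using the box's on-screen size as a proxy for depth: as the hand approaches the webcam it fills more of the frame, so the puppet moves toward the game camera.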

Step 5: Completion




Sluggo the Virtual Puppet: Part 3

Continued from Part 2.

Winter 2025 Review: Sluggo the Virtual Puppet

Virtual Sluggo giving you the side eye

This is a continuing journal of my project to design and build "Sluggo," a physical hand puppet that I digitized, rigged, and developed with hand gesture controls in Unity.

Rigging Sluggo

The model was cleaned up and retopologized. Next, it needed a skeleton and skin binding so that the static mesh could be controlled dynamically through animation.

Step 1: Create the Skeleton
Sluggo in the X-ray machine

Most rigging tools are designed to create a skeleton for a humanoid bipedal character. Sluggo has a very different body from a human, so I needed to create a custom rig.

Starting at the root, Sluggo has a few spine joints running up to the base of his head, which also serves as the pivot point for his jaw. One joint controls the lower lip, which flaps open and closed when talking, operated by my thumb. The upper lip is controlled by two separate joints, operated by my middle and ring fingers, which allow me to manipulate them individually to "scrunch up" his upper face. This effect is used to express frustration in Muppets like Kermit and Elmo.

The two joints from the top of the head to the bases of the eyestalks will remain static. The eyestalks themselves are controlled by my index and pinky fingers.

Step 2: Paint the Weights
The influence of Sluggo's lower lip

Next, I painted the amount of influence each joint has on how the mesh deforms when it moves. Maya automatically set weight paints when the skin was bound to the skeleton, but these were not very good. I ended up flooding all influence weight to the root, then painstakingly hand-painted weights for each joint.

This section of the report only has two steps, but weight painting a deforming cloth model took considerable tuning and refining to look good in different poses.


"Haven't I been through enough?"

Sluggo the Virtual Puppet: Part 2

Continued from Part 1.

Winter 2025 Review: Sluggo the Virtual Puppet

Sluggo in the digital realm

This is a continuing journal of my project to design and build "Sluggo," a physical hand puppet that I digitized, rigged, and developed with hand gesture controls in Unity.

Digitizing Sluggo

With the physical puppet complete, the next step in this multimedia project was to digitize Sluggo into a 3D model.

Step 1: Scanning
The scanning process

The physical puppet was set on a table atop a water bottle to hold it motionless during the scanning process. A hand scanner was used to digitize the puppet from every angle.

Sluggo the point cloud

The scanning process produced a cloud of colored points. The scan did a good job of capturing the details of the folds in the felt, the points where the pieces were sewn together, and the general color of the puppet. The point cloud needed to be converted to a triangular mesh with a material before further work could be performed.

Sluggo in Maya with 122K+ triangles

The 3D model is highly detailed, with more than 122,000 triangles. I felt that performance in my planned real-time environment would suffer with such a high-poly mesh. Additionally, editing the mesh would be much easier if it were retopologized and quadrangulated (I can triangulate it for use in Unity later). I attempted to reduce the polygon count in Maya, but the command would either fail or produce an unsatisfactory result.

Step 2: Correcting Errors in the Model with Meshmixer
Sluggo in Meshmixer

Autodesk Meshmixer is a relatively easy-to-use tool designed for 3D printing, but it can be repurposed for many types of 3D models. I sculpted the mesh to remove the few errors and aberrations from the scanning process, mostly around the base of the puppet (the end of the "sleeve"), and to build up structure that was lost on the eyeballs. Unfortunately, this added a lot of polygonal detail in the areas where I was sculpting, most notably in the eyeballs.

Step 3: Retopologizing in Instant Meshes
Sluggo getting a new topology in Instant Meshes

Instant Meshes is a tool for creating Instant Field-Aligned Meshes, based on a paper published by ACM. It helps create a topology that aligns well with the shape of a mesh: the user can paint direction lines directly onto the mesh, "combing" the lines of topology in specific directions. This quadrangulated the mesh and reduced the polygon count, but unfortunately, it also destroyed the work I had done to fix the eyeballs. The ends of Sluggo's eyestalks became thin and pointy.

Step 4: Clean up in Mudbox
Sluggo in MudBox

Autodesk Mudbox is a sculpting tool that allowed me to "puff up" Sluggo's eyeballs and give them back the structure lost in the previous step of the process. I then corrected any errors in the texture material by using the stamp tool directly on the model, which works a lot like the stamp tool in Photoshop. I also flattened out the half circles that comprise Sluggo's mouth and refined other parts of the model.

Step 5: Back to Maya
Sluggo in Maya down to 77K+ triangles

The mesh now has a quad topology and is reduced to just over 77,000 triangles. That is still a higher polygon count than I think I'll need, but I want the extra detail in the mesh so that it can deform nicely as it animates.

Next Step: Rigging

Sunday, March 16, 2025

Sluggo the Virtual Puppet: Part 1

Winter 2025 Review: Sluggo the Virtual Puppet

Actual Sluggo meets Virtual Sluggo

I recently completed an interactive project to learn more about MediaPipe, a Google-developed set of open source tools used for computer vision tasks such as face tracking and hand tracking. For this project, I designed and built a physical puppet, digitized it, rigged it, and developed a tool in Unity to control it with hand gestures.

The Physical Sluggo

The first step in this multimedia project was to design and produce a physical hand puppet, affectionately named "Sluggo."

Step 1: Research and Inspiration
Inspirational reading from my home library

I immediately took inspiration from the Henson Muppets for this project, which was a great excuse to peruse some of my books on the subject. Jim and Jane Henson and their earliest collaborators pioneered their own new, distinct style of puppetry, breathing fresh life into an ancient art form.

I settled on a "mitten head" style hand puppet, with the lower jaw operated by my thumb, much like Kermit the Frog. This would be made of a soft material like felt or cloth and the character would be malleable and organic. I decided against adding rod hands (like Kermit) or live hands (like Cookie Monster) as I felt that tracking a second hand may be beyond the scope of this project.

Step 2: Design
Cutting felt for two halves of the head and neck

I found some puppet patterns online that I used as a base for designing my own pattern. I assembled the paper pattern together to make sure that everything fit correctly, then disassembled it before I remembered to photograph it for my process report (oops). I chose brown felt as I originally had in mind to create a dog character, inspired by Rowlf the Dog, the first Muppet to become a "star." I traced my pattern onto the felt and cut out each piece.

Step 3: Production

Sewing the felt halves together

A folded, cardstock oval serves as the puppet's mouth (see photo, above), giving the soft felt a bit of structure. I sewed felt loops to the top and bottom of this piece to better give my fingers a grip when operating the puppet. I used craft glue and a needle and thread to assemble each piece together.

This puppet is expressive, but the head is collapsing

I tried to design pointy ears to add expressiveness to my dog, since it didn't have any arms to move around. My index and pinky fingers fit into these "ears" and I can easily move them in different directions.

The only remaining step was to design a pair of eyes, which I thought I would place on the head much like Kermit the Frog. It was at this point that I realized that the soft puppet was just too soft.

Step 4: Refinement
The "skull" designed to fit inside the head

I quickly cut some craft foam into a "skull" to provide better structure to the puppet's cranium. It needed a couple of holes so that I could fit my fingers through to operate what I still thought of as "ears." 



The puppet was nearly complete, but I had a serious problem. This doesn't look much like a dog! This strange puppet looks more like a slug to me, so Sluggo he must be.

Step 5: Sluggo Lives!
Ready for his close-up

I stole some lightweight, air-drying clay from my kids' craft kit to create the eyeballs, adhering them to the eyestalks (formerly, ears) with craft glue. Two black circles drawn with a Sharpie marker completed the process, and a new star was born!

Next Step: Digitization
"Digitization? That sounds like it's gonna hurt!"



Saturday, September 7, 2024

New Game Launched and Summer 2024 Research Review

Summer 2024 Review

Taxonomy of Virtual Spaces

  • Click here to play the new Taxonomy of Virtual Spaces - Expanded Edition interactive project now on itch.io! This is a major expansion to my earlier Taxonomy of Virtual Spaces - Prototype project that I developed last summer.
  • The interactive project is expanded with two new side view platformer worlds.
  • Each new game world may be experienced in eight different visuo-spatial configurations.
  • Each game world contains detailed information about the world and its visuo-spatial configurations with example game references.
  • Added a new Main Menu (see above) to load different game scenes and access the credits screen and other information about the game.
  • Added a music soundtrack of licensed music to better set the mood for each game in the project.

Charter on the Preservation of Digital Game Heritage

  • I attended the Save the Games Symposium (21-22 Aug 2024) hosted by the Strong Museum of Play in Rochester, New York.
  • I met numerous members of the game preservation and game academia communities and discussed my research with them.
  • I attended some incredibly informative talks and presentations.
  • I was given a private tour of The Strong's board game collection (notably, the Darwin Bromley collection).
  • I got to play a working reconstruction of the Cathode Ray Tube Amusement Device (see above), a mysterious device patented back in 1947 that may be the earliest artifact that approaches being a "video game" (although the device does not use a video signal). This device was nearly unknown except for its patent, and nobody knew if a prototype had been, or could ever be, constructed. Justin S. Barber and Volker Klocke have now built a working version. I was one of the first to shoot down a target with my blip of a surface-to-air missile! Barber also showed a connection between the creators of this project and Willy Higinbotham, creator of Tennis for Two, another extremely early development in the "prehistory" of digital games. This is exciting stuff for game historians!

      Monday, August 19, 2024

      Completing the World of Super C

      Taxonomy of Virtual Spaces Part II

      I completed adding environmental geometry and gameplay functionality into my recreation of Area 1, Fort Firestorm from Super C (Konami, 1988).

      Ramps


      The biggest hurdle to recreating this world was the existence of ramps. The platformer character controller script I wrote was based on the controls for Super Mario Bros. That game has no sloped surfaces that the player can walk on at all. All surfaces are sheer walls, floors, and ceilings. When I first tried to use my character controller on a sloped surface, my character just slid uncontrollably down the slope.

      The SMB team did plan to incorporate sloped surfaces, just like in Donkey Kong [source: Satoru Iwata looks over some early SMB development documents and says, "Looking at these specs, it says, 'Add refinements focusing on Donkey Kong's slopes, lifts, conveyor belts and ladders...'" Of that list, only lifts made it into the final SMB game. (Nintendo, 2010, "Volume 5: Original Super Mario Bros. Developers," Iwata Asks: Super Mario Bros. 25th Anniversary, https://iwataasks.nintendo.com/interviews/wii/mario25th/4/2/)]. The team decided to simplify things by making the game worlds out of rectangular blocks instead. Slopes were not introduced to the SMB game series until Super Mario Bros. 3 (JP: 1988, US: 1990).

      Debug images show the character's velocity vector as a cyan-colored line

      My platformer character controller needed to be modified so that the avatar could run up or down a slope without affecting their running speed or their ability to jump. In short, I added a new "ground" collision layer named "ramp." When the player character is standing on flat ground, they may run directly to the left or right. When standing on a ramp, all of their movement vectors are rotated to be relative to the surface of that ramp: running left or right moves the player parallel to the ramp's surface. When the player jumps, the ramp's surface is ignored and all movement is calculated just as if the character stood on flat ground.
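The vector rotation at the heart of that ramp logic can be sketched as follows (a Python simplification of my own; the project's actual controller is a Unity C# script):

```python
import math

# Sketch of ramp-relative movement: run input is rotated to lie along the
# ground surface so that speed along the surface is unchanged on a slope.

def ground_move(run_input, speed, slope_deg=0.0):
    """Return a (vx, vy) run velocity aligned with the ground.

    run_input: -1.0 (left) to 1.0 (right)
    slope_deg: 0 for flat ground; positive for a ramp rising to the right
    """
    angle = math.radians(slope_deg)
    vx = run_input * speed * math.cos(angle)
    vy = run_input * speed * math.sin(angle)
    return (vx, vy)

flat = ground_move(1.0, 5.0)        # straight right on flat ground
ramp = ground_move(1.0, 5.0, 30.0)  # up and to the right along a 30-degree ramp
# In both cases the speed along the surface, hypot(vx, vy), stays at 5.0.
```

Rotating the whole movement vector, rather than adding a separate vertical correction, is what prevents the avatar from sliding or slowing down on slopes.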

      Background Towers



      Several different types of towers are seen in the near background of the level. Note the graphic difference between the large towers in the original game and the ones in my project. These buildings serve to showcase two aspects of my research:

      - Multiplanar Space

      The towers are in the background of the player's plane of action and cannot be used by the player for navigation (the player avatar cannot jump onto the towers and cannot enter the doorways). However, enemy characters can use the towers as vantage points and spawn points, as seen in the image above. Implied enemy soldiers even throw grenades from behind some towers (not shown).

      The player avatar can indirectly affect this background layer by shooting. Bullets strike any target within the plane of projection, whether they are in the primary plane of action or in the background layer. The player avatar can collide with and be shot by background enemies as well.

      In actuality, the enemy characters are always on the same plane of action as the player avatar. The background towers and walls serve as parts of the environment that only enemies can stand on. These rules, combined with the way the towers are drawn on the screen, create the illusion of a multiplanar environment.

      - Hybrid Visuo-Spatial Projections

      The towers serve as a good example of the hybrid nature of digital game graphics. This is a subject I've written about many times and it serves as a core feature of my Taxonomy of Virtual Spaces system.

      Digital games, like all digital media, are hybrid in nature. Various symbols appear on the screen, sometimes using different means of projection or perspective, yet the player views all the disparate objects as part of a cohesive whole. (Source)
      All video game imagery is a hybrid collection of smaller images displayed and moved around a screen in order to generate a complete picture of the agents and environment of a virtual world. Those hybrid images may be "seen" from different angles from each other, yet we still understand the image as projecting a single, virtual space. (Source)

      The two different towers shown in the image are projected by different methods. The smaller, boxy towers use the same cabinet oblique projection as much of the rest of the environment. This is a paraline projection where receding lines are all in parallel (see the yellow lines in the image). The larger towers on girders use a naive perspective projection that is similar to 1-point perspective, but the receding lines do not quite converge on the same vanishing point (see the cyan lines in the image). The roof is approximately in 1-point perspective, but the floor uses an oblique projection. The two combine into a naive perspective with opposite oblique projections that approximate receding lines that converge in the distance. This is why the larger towers look strange in my project when rendered in cabinet oblique projection.

      Dream of the Palace (Giotto di Bondone, 1297-1300)

      Naive perspective is often seen in western art before Alberti published Della Pittura (1435), detailing the methods of linear perspective developed by Brunelleschi during the Italian Renaissance. Artists like Giotto painted buildings with roughly converging receding lines (or orthogonals) to approximate a perspectival view.

      Art and Representation (Willats, John, 1997, pg. 64)

      As seen in John Willats' analysis above, there is a sense of a horizon line near the center of Giotto's palace, where the receding lines are horizontal on the picture plane. Each floor of the palace is painted at an oblique angle tilted toward that horizon line. These receding lines do not converge and do not form a linear perspective.

      Level End and Beyond

      The end of the level leads into a large building through what appears to be a breached garage door. This serves as a transition to Area 2, First Base, in which the view becomes planometric naive perspective (with a bird's eye view looking down at the floor). Area 3, Jungle changes again with an elevation vertical oblique projection. Area 4, Inner Base starts with elevation cabinet oblique right projection followed by elevation naive perspective. Area 5, The Cliff starts at elevation vertical oblique projection, then switches to elevation cabinet oblique right projection (note that Area 1 is left projection). All of the first five levels of this game use different projection methods for their environment art assets.

      A question that may be asked here is: why? There are several possible reasons. One may be a desire for visual variety, which makes sense given the variety of navigational gameplay in different levels (horizontal side-scrolling elevation view, vertical scrolling elevation view, vertical scrolling planometric view). Another possibility is that different artists worked on different levels (although I think that Setsu Muraki was the only artist for the game's graphics).

      Thursday, August 15, 2024

      Building the World of Super C

      Taxonomy of Virtual Spaces Part II

      My second platformer game world is Area 1, Fort Firestorm from Super C (Konami, 1988). I am creating the world in 3-D after analyzing the level's cabinet oblique projection, which gives some sense of depth into screen space and dimensionality to the game world.

      Comparison between Super C and my project

      As can be seen in the gif above, I have been recreating the game world of Fort Firestorm in 3-D. I spent some time tuning and adjusting the camera settings so now the cabinet oblique projection seen in the game is perfectly recreated.*


      Yoshi's Island graphic distortion analysis by destroyerofseconds Source

      Super Metroid graphic distortion analysis by destroyerofseconds Source

      * Almost perfect, that is. As I've written about before, there is a difference between a "pixel-perfect" image of NES or SNES graphics (such as is displayed on most emulators) and how those graphics would look on a CRT screen. The full graphic resolution for both systems is 256 x 240 (an 8:7 aspect ratio), which would be stretched horizontally, with "wide pixels," to display at a 4:3 aspect ratio on a standard-definition television. That means that the 45 degree angle of the receding lines of the cabinet oblique projection in Super C should be at a smaller angle in order to be screen-perfect.

      However, there is evidence that many Nintendo artists did not take this aspect ratio distortion into account when creating their graphics. This is most noticeable with objects that are perfect circles. As seen in the images above, the circles in these two first-party Super Nintendo games look correct at 8:7 ratio, but become distorted at the 4:3 ratio they would be seen in by the player (note that NES and SNES both have the same graphic resolution).
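The size of that angle correction is easy to work out, assuming the 8:7-to-4:3 horizontal stretch described above:

```python
import math

# Horizontal stretch applied when 8:7 pixels are shown on a 4:3 screen.
stretch = (4 / 3) / (8 / 7)  # = 7/6, the "wide pixel" factor

# A 45-degree receding line has slope 1 in the raw pixel grid.
# Stretching x by `stretch` changes the on-screen slope to 1/stretch.
screen_angle = math.degrees(math.atan(1 / stretch))

print(round(stretch, 4))       # 1.1667
print(round(screen_angle, 1))  # 40.6
```

So a receding line drawn at 45 degrees in the pixel grid would appear at roughly 40.6 degrees on a CRT, which is the gap between "pixel-perfect" and "screen-perfect" cabinet oblique projection.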
