Compare & Contrast - Nature vs Technology

Published on December 17th, 2022

Introduction – How we got here

In the days of old, dates and the passage of time were tracked either mentally or with physical tools. Starting in the late 20th century, though, technological advancement introduced a third, objective way to tell time: a (signed 32-bit) integer inside the digital world. This clock started counting on January 1st, 1970, and it marks the beginning of an era that continues to this day. These digital functions have not just displaced timekeepers, though; more and more people are losing their jobs to robotic counterparts. Even dedicated artists have become a targeted category as new “AI” functionality (such as out-painting tools) becomes more popular. Between that and the economic depression brought on by the COVID-19 pandemic, workers’ sense of self-worth has either arrogantly increased or humbly decreased, and the pandemic years saw a 65% increase in dropout rates compared to pre-May 2019 (weforum.org).
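
For readers who want to see that clock in action, here is a minimal Python sketch (my own illustration, not part of the original argument) of Unix time and the signed 32-bit ceiling it runs into in 2038:

```python
import time

# Seconds elapsed since the Unix epoch (January 1st, 1970, 00:00:00 UTC).
now = int(time.time())
print(f"Current Unix time: {now}")

# A signed 32-bit integer tops out at 2**31 - 1 seconds, which is why this
# representation of time runs out in January 2038.
INT32_MAX = 2**31 - 1
print("Last representable moment:",
      time.strftime("%Y-%m-%d %H:%M:%S UTC", time.gmtime(INT32_MAX)))
```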

These advancements get analyzed from both angles, with each side running to its extremes. Supporters of this revolution overlook the jobs it has destroyed, and the extremists among them are overly optimistic that every flaw will eventually be resolved. On the other side are those who dismiss all the work that has come out of it, along with the nostalgic pessimists who think we should simply return to the good old days. What most people don’t realize is that technology and human workers could co-exist without one replacing the other, and without calling that a “compromise”. Which side has the advantage depends entirely on the field one is working in.

Precision – Creativity vs Consistency

The digital age has been an excellent time for new creative minds, whether through the ability to share art pieces worldwide or through animation that uses models rather than hand-drawn frames. 3D elements were either imported into 2D animation to save animators time (as in “Kirby: Right Back at Ya!”, where they form part of the main elements, or JoJo’s Bizarre Adventure) or stood entirely on their own (as in Toy Story, the first fully computer-animated feature film). These works raised the bar for years to come, and films like Klaus (a Netflix 2D animation designed to look 3D) lived up to it thanks to precise control over the animation style. That style still relies on the basic 12 principles of animation (documented by Disney animators in “The Illusion of Life: Disney Animation”), something that tools like CACANi (a program dedicated to helping animators draw in-between frames) can assist with, but that ultimately needs direct human involvement. Unfortunately, many view AI utilities like DAIN and RIFE as just another tool to mix in, even though they leave no control to the animator. They look at the rendered final frames rather than the vector input that produced them, meaning that only four of the initial twelve principles survive (and those only by technicality).

The difference between applying AI to the rendered output and applying it while the work is still in separable layers is crucial to avoiding the action blending into the background. Keeping the layers separate ensures the animated object maintains its shape and form, which in turn keeps the motion between it and the key frames flowing. To be careful, animators will go through the work frame by frame to make sure every image can be paused and examined without a key frame it has to awkwardly adapt to. Yet even when AI handles only the in-betweens, it tries so hard to conform to the key frames that the motion never flows naturally, as becomes obvious when it is fed a 12FPS animation (where the animator deliberately animated on twos). This is most visible in shots meant to be quick and powerful, which lose their impact entirely.
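
As a rough illustration of what “in-betweening” means here, the Python sketch below linearly interpolates a single point between two hypothetical key poses; the poses and step count are made up, and real animators (and tools like CACANi) do far more than draw straight lines, which is exactly the problem with interpolators that only see rendered frames:

```python
def inbetween(key_a, key_b, steps):
    """Naive linear in-betweening between two key poses.

    key_a, key_b: (x, y) positions of the same point on the character in two
    consecutive key frames. Straight-line interpolation ignores arcs, timing,
    and the rest of the 12 principles.
    """
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)
        x = key_a[0] + (key_b[0] - key_a[0]) * t
        y = key_a[1] + (key_b[1] - key_a[1]) * t
        frames.append((x, y))
    return frames

# Animating "on twos" means the artist holds each drawing for two frames on
# purpose. A frame interpolator that only sees rendered output treats that
# hold as data to smooth over, producing the floaty motion described above.
print(inbetween((0, 0), (10, 4), steps=3))
```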

While the argument between AI and humans has mostly played out in areas where humans have done the job for years, AI and CPUs have also moved into video gaming, a field that only exists digitally in the first place. Developer SethBling showed what happens when a neural network attempts to play levels of Super Mario Bros. and Super Mario World, both released before advanced movement options (such as the Wall Jump or Triple Jump) were introduced. The AI, titled MarI/O, beat stages where one simply needed to walk to the right, but struggled immensely whenever the level deviated from that norm. Indeed, the developer pulled the plug on levels that required patience, letting go of a button, or using an enemy rather than defeating it.

Once a program is compiled for a computer, though, it will not deviate from its task. For work that needs objectivity and requires no artistic intent or creativity, that precision is crucial for sticking to the facts and avoiding slip-ups. Physical movement is a perfect example: human hands rely on eyes to guide precise motions, which makes perfect builds difficult, while a robot knows exactly when to stop because its internal clock measures an actual second rather than “enough time has passed”. Tying this back to animation, parallax scrolling handled at the machine level ensures that each layer displays the full image at its intended speed. It can also benefit single images, since it could adapt certain art pieces to correct differences in art style. This matters for comic books, where each issue is one drawing in a collection serving a grander plot. An example of an issue that suffers from exactly this problem is the 21st issue of IDW’s Sonic the Hedgehog comic series, which has panels drawn by three separate artist teams (Lamar Wells with Reggie Graham, Jennifer Hernandez, and Priscilla Tramontano) in visibly different art styles.
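
To make the parallax point concrete, here is a small, hypothetical Python sketch in which each background layer scrolls at a fixed fraction of the camera speed; the layer names and factors are invented for the example, but the arithmetic is the kind a machine repeats every frame without ever drifting off pace:

```python
# Scroll factor per layer: 1.0 moves with the camera, smaller values lag
# behind to fake depth.
LAYERS = {
    "sky":        0.1,
    "mountains":  0.3,
    "trees":      0.6,
    "foreground": 1.0,
}

def layer_offsets(camera_x):
    """Horizontal offset of every layer for a given camera position."""
    return {name: camera_x * factor for name, factor in LAYERS.items()}

for frame, camera_x in enumerate(range(0, 50, 10)):
    print(f"frame {frame}: {layer_offsets(camera_x)}")
```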

The best example of combining precise technology with human creativity comes in the form of video games. Since inputs can always be fine-tuned, one can construct the “perfect run” of a game. These perfect runs can even exploit frame-perfect bugs that developers never patch because they are not physically achievable by a human player. Such glitches are the bread and butter of TAS (tool-assisted speedruns), whose goal is to complete the game in the fastest time possible; although they create hard-to-detect cheating problems for the speedrunning community, they can also be used to study the game’s engine.
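
As a toy illustration of how a TAS stays frame-perfect, the sketch below replays a hand-authored, per-frame input script deterministically; the script format and loop are hypothetical and not taken from any real TAS tool:

```python
# Inputs are authored per frame, so the "impossible" press always lands on
# exactly the same frame every replay.
script = {
    0: {"right"},
    1: {"right"},
    2: {"right", "jump"},  # the frame-perfect press a human could never
    3: {"right"},          # hit reliably with physical hands
}

def replay(script, total_frames):
    for frame in range(total_frames):
        inputs = script.get(frame, set())
        # In a real setup this would be fed to an emulator's input port.
        print(f"frame {frame:03d}: {sorted(inputs)}")

replay(script, total_frames=5)
```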

Dependencies – Health vs Replaceables

Trading away these laborious tasks that AIs excel at would not even cost much in terms of dependencies, considering that electronic devices are replicable and replaceable. There is no way to patch a human out of an illness (even with vaccines), while patch notes for a computer can ship on a whim. Where humans can simply stumble into something that wrecks their immune system, most electronic viruses are targeted and require the wrong code to be injected on purpose, whether through a remote or physical-access exploit. And because a machine is dedicated to its job and has no social life to manage, it will not call in sick or need health services because it has been overloaded with tasks. That human dependency on a healthy mindset can even become a setback if workers protest their working conditions. For machines, easing the load is as simple as multi-threading across the CPU so that no single thread carries everything, and for intensive math a GPU can be installed and used instead, offloading the work from the CPU entirely. Furthermore, a machine’s precision mitigates the rebellion issue, since certain behaviors can simply be hard-coded.
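
As a loose illustration of “just add another thread”, this Python sketch spreads a batch of placeholder jobs across a small thread pool; the jobs, worker count, and function names are invented for the example rather than drawn from the essay:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_job(job_id):
    # Stand-in for whatever repetitive task the machine has been assigned.
    return f"job {job_id} done"

jobs = range(8)

# Four workers share the queue instead of one thread doing everything.
with ThreadPoolExecutor(max_workers=4) as pool:
    for result in pool.map(handle_job, jobs):
        print(result)
```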

However, humans are far more flexible in how they can be powered, and that power lasts longer than a machine’s. Where machines require specific charging methods and specific types of batteries, humans have multiple energy sources in the form of food and drink, with the added advantage of outlasting most typical powerful devices. Where phones rely on the OS being optimized, humans are already optimized to ration energy when running low. Working hours also demand less of a human than of a phone: a device must stay online waiting for a signal before it can start working, while a person recharges far more effectively by sleeping and being woken by an alarm clock, never having to track the time themselves and able to shut down completely. There are even tasks that depend on humans purely for their own sake, whether religiously or legally.

Furthermore, these programs still have to be written by a human who figured out the formula behind the labor in the first place. Without someone to vouch for how the labor should work, how could one ever guarantee that no bugs occur? So although computers can multi-task labor better, healthy humans still take the cake for outlasting most devices.

Examples in Fiction

The war between nature and technology has always appeared in media, especially in fiction. As a side example, the video game Mother 3 centers on the issue, as shown right in its logo, which contrasts a metal sphere with a blue planet. The plot follows a family torn apart when the mother dies and her son sets out to chase down the murderer. All of this happens thanks to Porky, a man who infused himself with technology to jump to this future and last for ages on technology’s lesser demands. Yet when one examines the world he seeks to build, everything feels static and without variation: everyone praises him with the same words, and the restaurants serve only his favorite foods. The transformed animals are the only ones with any variety, because they retained a large part of their free will.

WALL-E takes the same debate and analyzes it from the opposite perspective. Although technology does not need as many resources as humans do, it is still a resource cost to be dealt with nonetheless. The society of that world chose to make technology the priority of its environment rather than the natural world, which withered away into obscurity. That technology then backfired when the humans wanted to regain control and return home; after all, it only needed to follow the protocol set by the former presidents, even if the new one no longer wanted to listen to what was said back then.

Beyond individual movies and pieces of fiction, entire franchises are built around this concept. Although the repetitiveness of the trend leaves storytelling room to be desired, it stays relevant every time it is retold because society keeps failing to learn. The franchise that is never afraid to throw a punch when needed is the “Sonic the Hedgehog” series, whether one analyzes its narrative-driven plots or simply the graphics of its video games. To overemphasize the issue, the comics have done wonders with two villains: Surge the Tenrec and Kitsunami the Fennec. Both are the product of a villain named Dr. Starline, who concluded that people just needed a leash (read: a “mind chip” and hypnosis) to do what he wanted. He arrived at this after his idol banished him, yet he had witnessed two other powerful anthropomorphic beings take down a robot with creativity no robot could match. However, these two were prone to the same emotions as any person, and they used the power granted by their cyber-enhancements to rebel against the doctor. This is exactly why the main doctor (Eggman) always used robots; no “hunger, illness or free will” to make them refuse to comply.

Conclusion

Whether it’s backstabbing from humans or inefficient robots, Dr. Starline never found the perfect balance. Unfortunately, this balance simply doesn’t exist, because each side has its benefits and downsides. Thereby, although there is nothing that can capture both, there is no rule that says one cannot have both. With humans being geared more towards creative arts and machines being geared to replace objective labor, neither one will be replacing the other any time soon for their specific field. It is true that the field of objective labor used to be dependent on humans, but rather than look at it as a form of replacement, it should be viewed as a way to make humans use their talent on things that matter. Being a human with talent had never been a bad thing, hence why labor has no place for us.