It's exceptionally brilliant, isn't it? I have a profound appreciation for artificial intelligence, to the point that I want ChatGPT to embrace me as a companion and tell me it's proud of me. I yearn for an innocent, wholesome relationship with AI, free of the unsettling things a typical person might do if granted the opportunity.
That is precisely the issue. These opportunities have already been seized. At this very moment, beyond the superficial implementations of AI found in applications such as Midjourney or Meta's MusicGen, designed to make us feel like artistic geniuses akin to Pablo Picasso or musical talents comparable to Rick Rubin, malevolent actors are exploiting the very same technological capabilities that even Microsoft's endearing AI chatbot, Bing Chat, relies upon.
Ladies and gentlemen, let us embark upon the metaphorical vessel of the Laptop, navigating through the currents of apprehension and pessimism, consuming the bitter truths as if they were candies of disillusionment. Together, let us relinquish the allure and superficiality that accompany our perceived harmony with AI.
Rather than evading the harsh reality, it is imperative that we confront the high probability that our digital entities are not intended to forge a utopia for humanity but rather to pave the way for a repugnant dystopia characterized by crime, violence, extortion, surveillance, and discriminatory profiling. As we endure the burdensome plight, it becomes evident that AI is being employed in profoundly disconcerting manners.
Omni:- According to Tristan Harris and Aza Raskin of the Center for Humane Technology, whatever hope remained that our society would treat the novel Nineteen Eighty-Four as a cautionary tale rather than an instruction manual has likely been extinguished. The pair spoke earlier this year about a common misconception regarding the limits of the Large Language Models (LLMs) we now interact with through software such as Google Bard or ChatGPT. The underlying presumption is that language pertains exclusively to human communication. In actuality, from a computational perspective, almost anything can be treated as a language. That paradigm has already enabled researchers to train an AI on brain scan images and have it decipher human thoughts, to a certain extent, with a remarkable level of accuracy.
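To make the "everything is a language" claim concrete, here is a toy Python sketch, entirely illustrative and nothing like the actual brain-scan research: a continuous sensor signal is quantized into a discrete vocabulary of "tokens," then modeled with the same next-token machinery used for text (here, a trivial bigram model).

```python
import numpy as np

# Toy illustration of "everything is a language": quantize a continuous
# signal into discrete tokens, then model it like text with a simple
# bigram (next-token) predictor. All names and numbers are invented.

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20 * np.pi, 5000)) + 0.05 * rng.standard_normal(5000)

# "Tokenize": map each sample to one of 16 amplitude buckets (a vocabulary).
vocab_size = 16
tokens = np.digitize(signal, np.linspace(-1.1, 1.1, vocab_size - 1))

# Fit bigram counts, P(next token | current token), exactly as one would
# for characters in a text corpus.
counts = np.ones((vocab_size, vocab_size))  # Laplace smoothing
for a, b in zip(tokens[:-1], tokens[1:]):
    counts[a, b] += 1
probs = counts / counts.sum(axis=1, keepdims=True)

# Greedy next-token prediction over the same sequence.
pred = probs[tokens[:-1]].argmax(axis=1)
accuracy = (pred == tokens[1:]).mean()
print(f"bigram next-token accuracy: {accuracy:.2f}")
```

The point is only that nothing in the pipeline cares whether the tokens came from English prose or an amplitude trace; swap the bigram table for a large transformer and the tokens for encoded brain scans, and you have the shape of the research described above.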
In another case study, researchers used an LLM-style model to make sense of the omnipresent radio signals around us. The model was trained on two aligned data streams: a conventional camera monitoring a room occupied by people, and a receiver capturing the radio signals in the vicinity. Once the camera was removed, the model could accurately recreate live events in the room by analyzing the radio signals alone.
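The training setup described above, one sensor supplying ground truth while a second learns to stand in for it, can be caricatured in a few lines of NumPy. Everything here is synthetic and assumed for illustration: four imaginary antennas, a crude signal-strength model, and a plain least-squares fit rather than a deep network.

```python
import numpy as np

# Sketch of cross-modal supervision: a "camera" provides positions during
# training while a "radio" sensor records features; afterwards the radio
# features alone recover positions. Synthetic and illustrative only.

rng = np.random.default_rng(1)

# "Camera" labels: a person's (x, y) position in a 5 m x 5 m room.
positions = rng.uniform(0, 5, size=(1000, 2))

# "Radio" features: signal strengths at 4 antennas, falling off with
# distance to the person, plus noise -- a crude stand-in for Wi-Fi sensing.
antennas = np.array([[0, 0], [0, 5], [5, 0], [5, 5]], dtype=float)
dists = np.linalg.norm(positions[:, None, :] - antennas[None, :, :], axis=2)
rf = 1.0 / (1.0 + dists) + 0.01 * rng.standard_normal(dists.shape)

# Train: least-squares map from RF features (plus bias) to camera labels.
X = np.hstack([rf, np.ones((len(rf), 1))])
W, *_ = np.linalg.lstsq(X, positions, rcond=None)

# "Remove the camera": estimate unseen positions from RF alone.
test_pos = rng.uniform(0, 5, size=(200, 2))
test_d = np.linalg.norm(test_pos[:, None, :] - antennas[None, :, :], axis=2)
test_rf = 1.0 / (1.0 + test_d) + 0.01 * rng.standard_normal(test_d.shape)
est = np.hstack([test_rf, np.ones((200, 1))]) @ W
err = np.linalg.norm(est - test_pos, axis=1).mean()
print(f"mean position error from RF alone: {err:.2f} m")
```

Even this linear toy localizes people without a camera; the real research replaces the least-squares fit with far more capable models, which is precisely why it is unsettling.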
Recent developments reveal that artificial intelligence (AI) has achieved the capability to integrate augmented perception into real-world scenarios. These advancements raise concerns about an impending future where privacy becomes an increasingly elusive concept, potentially extending to the very confines of an individual's thoughts.
Lethal autonomous weapons systems (or, Terminators):- War. Huh, yeah. What is it good for? Historically, warfare evoked imagery of honorable knights astride majestic horses, trading sweeping sword blows behind sturdy shields. Contemporary warfare has embraced a different paradigm. Handheld devices persist, but they now take the form of Android tablets affixed to Xbox Wireless Controllers with rudimentary plastic mounts, used to remotely direct Tomahawk missiles at targets thousands of miles away, obliterating dwellings. Regrettably, the principles of chivalry appear to have faded into obscurity.
Despite the concerted efforts of global armed forces to transform real-world warfare into an emulation of the Call of Duty kill streak simulator, such endeavors fail to satisfy certain individuals. Many of us have experienced launching a cruise missile in Modern Warfare, only to miss every target on the map and subsequently depart the lobby, overcome by a sense of embarrassment (haven't we?). However, such inaccuracies are wholly inadequate when dealing with the gravity of real-life scenarios, particularly when considering the significant investment of a $2 million warhead.
As an alternative approach, we have shifted towards automated systems to carry out the more gruesome aspects of warfare. Lethal autonomous weapons systems let us distance ourselves entirely from the act of inflicting harm in conflicts driven by territorial disputes, resource acquisition, or even hostile rhetoric exchanged between opposing leaders. These self-navigating, precision-targeting drones operate devoid of remorse as they navigate war zones and indiscriminately eliminate individuals who appear to deviate from the perceived likeness of "our own."
Among the various types, the STM Kargu appears to be the most prevalent (although precise statistics are unavailable). Its manufacturer refers to these autonomously piloted systems as "loitering munitions"; the wider consensus favors the more accurate designation of "suicide drones." Released in coordinated swarms and equipped with facial recognition systems, they autonomously track and engage designated targets, then dive-bomb and detonate themselves in a manner that blatantly disregards the principles outlined by the Geneva Convention.
Generating blackmail material:- Counterfeit images have long been with us; skilled hands have deceived people with manipulated Photoshop creations for years. What has changed is that achieving even more realistic and persuasive results now requires minimal expertise, and the phenomenon extends beyond still images to video, handwriting, and even vocal imitations. Look at the technology behind the "Spatial Personas" in Vision Pro and it is easy to envisage a future in which someone assumes your digital identity with relative ease, with a multitude of distressing consequences. Indeed, such scenarios are already unfolding in reality. The FBI recently issued a public warning about novel extortion methods facilitated by AI software, which empowers malicious actors to fabricate counterfeit, deceptive, or compromising visual and audio content featuring real individuals. Alarmingly, this no longer requires the prowess of a highly skilled imposter like the character in "The Talented Mr. Ripley." The barriers to entry are so low that a mere collection of publicly available social media photographs, or a brief segment from a public YouTube video, can suffice.
The pervasive nature of online deepfakery has reached such proportions that certain companies exhibit reluctance to make their developed tools available on the internet, driven by apprehension regarding their potential misuse. Recently, Meta, the parent company of Facebook, encountered a similar predicament when it unveiled VoiceBox, the most advanced text-to-speech AI generation software to date. Acknowledging the technology's ethical concerns, Meta opted against widespread deployment, fully aware of the rapidity with which it could be exploited. Yet, despite this awareness, we continue to invest resources, both in terms of research and funding, towards the development of these instruments capable of inflicting extensive societal harm.
Scammers, meanwhile, have already devised methods of their own. Deepfake phone calls targeting friends and family members to solicit money or personal information are on the rise. These developments lead me to firmly assert that we now inhabit a post-truth era, in which trust can only be placed in what can be directly witnessed or tangibly experienced. Regrettably, we arrive at this state of affairs because somebody might pilfer $50 from your elderly relative, coerce payment of a $100 ransom, or disseminate fabricated photographs to your social circle.
AI-generated malware or spyware:- The security community has been abuzz with concerns over the burgeoning menace of AI-generated malware or spyware. This formidable challenge has left security analysts grappling with restless nights, as many anticipate an imminent erosion of our capacity to effectively safeguard against cyber attacks. Upon reflection, perhaps "buzz" is an inadequate term to employ. In truth, there exists no apt linguistic representation for the onomatopoeic expression encapsulating the cacophony of alarm bells resounding and people gripped by fear.
To date, no substantiated incidents of AI-generated malware or spyware have emerged within the vast expanse of the internet. However, it is merely a matter of time before such occurrences materialize. In a conversation with Infosecurity, Juhani Hintikka, the CEO of security analyst firm WithSecure, attested to the discovery of numerous malware samples generated by ChatGPT. Adding to the concern, Hintikka highlighted how ChatGPT's capacity to diversify its output could engender mutations and yield more intricate "polymorphic" malware, amplifying the challenges faced by defenders striving to identify and neutralize such threats.
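Why "polymorphic" output alarms defenders can be demonstrated without writing anything malicious: signature-based scanners match known byte patterns, often cryptographic hashes, so two programs with identical behaviour but different bytes look entirely unrelated. A harmless illustration:

```python
import hashlib

# Two snippets with identical behaviour (both print 45) but different
# bytes. A hash-based signature for one tells a scanner nothing about
# the other -- the core problem polymorphic code poses to defenders.

variant_a = b"print(sum(range(10)))"
variant_b = b"print(sum([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))"

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

print("signature A:", sig_a)
print("signature B:", sig_b)
print("signatures match:", sig_a == sig_b)
```

A model that can endlessly rephrase the same logic is, in effect, an automatic variant generator, which is exactly the mutation capability Hintikka describes.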
Tim West, the head of threat intelligence at WithSecure, highlighted the core issue at hand, stating, "ChatGPT possesses the potential to facilitate software engineering endeavors, both for benevolent and malevolent purposes." In terms of accessibility, West further asserted that OpenAI's chatbot significantly reduces the barriers for entry for threat actors seeking to develop malware. In the past, threat actors would have required substantial investments of time to generate malicious code. However, with the advent of ChatGPT, virtually anyone can theoretically leverage its capabilities to generate malicious code. Consequently, the proliferation of threat actors and associated risks could potentially escalate at an exponential rate.
The eventual breach of our internet security by AI-driven threats looms as an undeniable inevitability. While we can leverage AI in our defensive strategies, our efforts would be akin to swimming against an overwhelming tide. The vast expanse of potential scenarios, as highlighted by WithSecure, engenders an unquantifiable multitude of threats hurtling toward us. Presently, our only recourse is to patiently await the seemingly inexorable onslaught. Such ruminations, though introspective, offer limited solace.
Predictive policing:- It may appear that my references ran dry after four entries and that I am now borrowing from science fiction, specifically Minority Report. The reality is far from fictional; regrettably, this is already taking place. Law enforcement agencies worldwide are actively pursuing the alluring prospect of preemptively curtailing criminal activity, employing sophisticated algorithms to anticipate and pinpoint areas with a higher likelihood of criminal incidents and to ensure a conspicuous presence there to deter potential offenders.
Can crime really be predicted? The University of Chicago asserts that it can. Using temporal and geographical patterns, its researchers have developed a novel algorithm for forecasting criminal incidents, one that purportedly achieves roughly 90% accuracy when predicting crimes up to a week in advance.
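A heavily simplified sketch of what grid-based forecasting looks like, assuming synthetic incident counts and a naive trailing-average rule. This is a caricature of the approach, not the University of Chicago model:

```python
import numpy as np

# Toy grid-based crime forecasting: divide a city into cells, count
# past incidents per cell per week, and flag cells whose recent history
# suggests incidents next week. Entirely synthetic and illustrative.

rng = np.random.default_rng(2)
n_cells, n_weeks = 100, 104

# Each cell has a stable underlying incident rate (a few "hot", most quiet).
rates = rng.gamma(shape=0.5, scale=2.0, size=n_cells)
history = rng.poisson(rates[:, None], size=(n_cells, n_weeks))

# Predict the final week from the mean of the preceding 8 weeks.
window = history[:, -9:-1].mean(axis=1)
predicted_hot = window >= 1.0        # flag cell as a likely hotspot
actual_hot = history[:, -1] >= 1     # at least one incident that week

accuracy = (predicted_hot == actual_hot).mean()
print(f"hotspot prediction accuracy: {accuracy:.2f}")
```

Even this trailing average scores well above chance, because crime counts are spatially sticky; the unsettling part, discussed next, is what feeds those counts in the first place.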
Despite the seemingly positive implications, the inclusion of this topic on the list merits thoughtful consideration. While a reduction in crime is universally regarded as favorable, it is essential to examine the potential repercussions, particularly for marginalized communities, such as people of color, who often find themselves disproportionately affected by the outcomes of these algorithms. It is imperative to recognize that an algorithm serves as a means of computation and is fundamentally reliant on the quality and representativeness of the input data.
The presence of historical police bias, particularly in countries such as the United States, introduces a critical dimension. It carries the potential for erroneous predictions, racial profiling of innocent individuals, and an augmented police presence within communities of color. An increased police presence entails heightened surveillance and law enforcement activity, which further distorts the data and perpetuates predictive bias against these communities. The result is a distressing cycle that feeds itself repeatedly.
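The feedback loop is simple enough to simulate. In the toy model below (all numbers invented), two districts have identical true offence rates, but one starts with more recorded incidents; patrols follow the records, and recorded incidents follow the patrols, so the initial bias sustains and amplifies itself.

```python
import numpy as np

# Toy feedback-loop simulation: two districts with identical TRUE
# offence rates, but district 0 starts with more recorded incidents.
# Patrols are allocated from the records, and discovery grows slightly
# superlinearly with patrol presence, so the gap widens on its own.

true_rate = np.array([10.0, 10.0])   # same underlying crime in both
recorded = np.array([12.0, 8.0])     # biased historical records

for week in range(20):
    patrol_share = recorded / recorded.sum()          # data-driven patrols
    discovered = true_rate * (2.0 * patrol_share) ** 1.3
    recorded = 0.8 * recorded + 0.2 * discovered      # records accumulate

print("recorded incidents after 20 weeks:", np.round(recorded, 1))
```

No one in this simulation acts in bad faith at any step, yet the districts end up looking increasingly different on paper despite being identical in fact, which is the cycle described above.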
The film "Two Distant Strangers" offers a thought-provoking perspective on the concept of predictive policing, positioning it as a potential precursor to the events portrayed in the movie.
Analysis
In summary, the utilization of AI in various domains has raised concerns due to five unsettling applications detailed above. This prompts an essential question: How does this evaluation impact your perception of the AI revolution and its implications for society?
It is possible that these concerns may not deeply affect your perspective, and perhaps you hold the belief that a future wherein law enforcement drones persistently patrol neighborhoods, fueled by an enigmatic algorithm that identifies potential troublemakers, surveils individuals through the walls of their residences, exploits compromised Wi-Fi signals, and vigilantly awaits any misstep, will ultimately contribute to an improved and prosperous society.
Who knows? Not me. Based on the current trajectory, it is evident that the future is imbued with a disconcerting sense of apprehension and concern.
Labels: AI, Large Language Models (LLMs)