On Artificial Intelligence: 15 Observations from a Design Researcher
New Version Updates: This is the second iteration of this article. Initially published on LinkedIn, it has been refined based on valuable feedback. Each observation now comes with a concrete example and an actionable practice, enhancing its practicality and usability. There are two artifacts coming, so subscribe to my newsletter to stay in the loop!
The idea for this article started with me reading @mznmel ’s retweet, where everyone on Twitter and LinkedIn was leaving their opinions about this technology. Many were in favor out of enthusiasm for all the powers AI offers. While I share this enthusiasm, I am, in parallel, concerned about numerous issues surrounding this life-changing technology in our lives, policies, and projections. I am reading Timothy Snyder's book, On Tyranny: Twenty Lessons from the Twentieth Century, and both events inspired me to write this article using the same format as his book, borrowing a couple of titles and points I found relevant. I have a hunch I will visit and revisit this, and that I will read it in a few months with the same embarrassment of an adult looking at their early-2000s fashion choices. Before you proceed, let’s define an observation. An observation is a judgment on, or inference from, what one has observed. Here are 15 observations on artificial intelligence from a Human-Computer Interaction designer and researcher working in the technology market.
1. Do Not Obey in Advance
Futuristic AI technology is here in the present, which makes us excited not only to watch but also to be part of it. I understand. At the same time, we need to reflect on our participation and the impact it has on three fronts: the individual, the group, and the future state. The term anticipatory obedience is defined as adapting instinctively, without reflecting, to a new situation. This hits home for me when discussing emerging technologies like AI and the corporate environment in general. We need to raise awareness about anticipatory obedience. Now is the time to reflect before engaging and to set boundaries, so we do not find ourselves helping and contributing to the demise of our privacy and to our unfreedom. Now more than ever, you need to be vigilant about your rights to privacy and hold yourself, before anyone else, accountable. Observations #2, #3, and #4 are examples.
🌝 Example: Accepting AI-powered surveillance systems or sharing data with AI service providers without considering their implications could lead to widespread privacy violations.
🧩 Practice: Search for "anticipatory obedience." Share with @areejalution the things you think your community does that fall under this category.
2. Read the Conditions and Terms
Like ticking the "I have read the terms and conditions" box on websites and apps without actually reading, we are doing the same with AI and gen AI technology. I empathize; I work in technology, and I do this. It is because we are bombarded and overloaded with complicated information. Please don't give up your rights because it is difficult or because you don't feel like reading and investigating. You must show up for yourself by reading, learning, and asking questions. Take your time to understand and reflect on what it means for you and for all of us, now and in the future. You do not need to be an expert in AI; instead, you need to be inquisitive and, when required, critical.
🌝 Example: Ignoring the terms of a social media platform's AI-driven content algorithm may result in unintended exposure to harmful or misleading information.
🧩 Practice: Choose an AI-driven application or service, thoroughly read its terms & conditions, and rate your understanding. Share with @areejalution. Here are a couple of tools to help you with this: first, https://tosdr.org/ and second, one that uses AI to help you, https://theresanaiforthat.com/gpt/terms-conditions-reader/
3. Don’t Accept the Giveaways
I translated this title loosely from an Arabic Bedouin proverb my father used to say about a tribe member joining the wave without fully understanding the consequences, or because they benefit directly without considering the implications. If you're an old Hollywood movie fan, the character Tancredi Falconeri from The Leopard (1963 film) may ring a bell. Before wearing the hat being distributed for free to everyone, pause and reflect on this affiliation. What is this hat? Who is distributing it, and why? Is it free? What is its message? Please don't wear the distributed hat because everyone else is and/or because it is … free. Gen AI is not free.
🌝 Example: Using AI-based facial recognition technology without questioning its potential for misuse or infringement on civil liberties.
🧩 Practice: Research a popular AI trend, e.g., autonomous systems. Critically analyze its potential ethical implications and societal impacts. Finally, think about adaptation. Teach me, @areejalution, about an interesting thing you've learned.
4. Demand Transparent and Easy-to-Understand Information
Understanding how gen AI and AI models work is difficult, even for tech people. This leads to a lack of transparency about how these models work and how they reach their conclusions, especially biased and unsafe ones. To promote transparency on a communicative level, we, at both the individual and governmental levels, need to demand digestible information and transparent strategies from technology providers. Moreover, independent institutions need to be established to provide independent sources of information.
To promote transparency on a practical level, we need to demand a learning feedback loop built into development lifecycles, where feedback requirements for tracking and reporting all models' performance, including errors and near misses, are identified, documented, and broadcast. Moreover, an independent review of each model's purpose, proposed analytic methods, anticipated variables, and intended use is also needed to illustrate and enforce transparency.
🌝 Example: Requesting clear explanations of how AI algorithms are trained and how they reach conclusions to avoid biased decision-making.
🧩 Practice: Write to a technology company requesting detailed information about their AI systems' functioning and transparency measures, and assess their response. Share with @areejalution
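The tracking-and-reporting loop this observation calls for can be made concrete. Here is a minimal sketch in Python of what such a feedback log might look like; all names (`ModelEvent`, `FeedbackLog`, the event types, and the example model name) are my own assumptions for illustration, not any real company's system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical event types the observation calls for: errors, near misses, overrides.
EVENT_TYPES = {"error", "near_miss", "override"}

@dataclass
class ModelEvent:
    model_name: str
    event_type: str      # one of EVENT_TYPES
    description: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class FeedbackLog:
    """Tracks and reports model performance events so they can be broadcast."""

    def __init__(self) -> None:
        self._events: list[ModelEvent] = []

    def record(self, event: ModelEvent) -> None:
        if event.event_type not in EVENT_TYPES:
            raise ValueError(f"unknown event type: {event.event_type}")
        self._events.append(event)

    def report(self, model_name: str) -> dict[str, int]:
        """Counts of each event type for one model: the reporting artifact."""
        counts = {t: 0 for t in EVENT_TYPES}
        for e in self._events:
            if e.model_name == model_name:
                counts[e.event_type] += 1
        return counts

# Example usage with a hypothetical model name:
log = FeedbackLog()
log.record(ModelEvent("credit-scorer-v2", "near_miss", "score flipped at threshold"))
log.record(ModelEvent("credit-scorer-v2", "error", "mislabeled applicant segment"))
print(log.report("credit-scorer-v2"))
```

The point is not the code itself but the discipline it represents: every error, near miss, and override is a first-class record that can be documented and broadcast, rather than quietly discarded.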
5. Build Institutions and Defend Them
All the players on the field have a horse in the race. Still, the giants are fighting to reach the AI summit. The torch awaits the hand. Therefore, this time is critical for funding independent institutions that aid individuals and governmental entities in reflecting, analyzing, researching, raising awareness, and producing work about AI technology's impact on our lives, jobs, and policies, and on what the future could look like. This would help all sides, whether you're in favor, skeptical, concerned, or against AI. These institutions can tranquilize the worry and fear people have about AI and help them see it in perspective. They will also help us avoid the potentially catastrophic consequences of advanced AI systems that are not aligned with human values and/or priorities. We need to build and defend institutions that help us establish a baseline (read observation #6), bring all perspectives to the dialogue table, and diversify the narrative, so we do not end up hearing a single narrative. This creates a much-needed balance.
🌝 Example: Funding organizations that study the societal effects of AI technologies and advocate for responsible development and deployment.
🧩 Practice: Pick one of the institutions from this list ➡️ Here is a list for you to check out. Which activity/initiative do you want to support, and why? Share with @areejalution.
6. Defend the Shared Grounds at Any Cost
"There are two ways to be fooled. One is to believe what is not true; the other is to refuse to accept what is true,” Soren Kierkegaard
You can see two approaches to AI-produced content. Approach 1: gen AI is deemed a panacea, and we treat its outcomes as certainties. Approach 2: disbelief and distrust of all that gen AI has to offer. Both approaches hold dangers in different domains. The shared danger is that in abandoning reality and facts, we abandon the shared ground under our very feet: the baseline. We need the shared grounds to establish three things: meaning, alignment, and agreement.
Meaning:
If we believe information produced by AI to be absolute certainty, then we leave no room to accept other facts. Is this a good state for maintaining meaning?
If we cannot deem any information produced by AI to be true, eventually we will lose all trust. It becomes all spectacle; then what was the objective of creating this powerful technology?
Alignment:
If we believe all information to be certainties, then what is the point of alignment?
If we cannot see and acknowledge different perspectives and we stick to one narrative, then what's the point of creating an intelligence that consumes data like no human can?
Agreement:
If we believe information produced by AI to be absolute certainty, then we are erasing agreement and ostracizing all those who disagree with the “certainties.” What would this do to our societies?
If we cannot see meaning and establish alignment about a certain scope, how can we agree? Without agreeing, how can progress be made? What is the point of a technology that can hinder human progress?
Approach 1, in criminal-justice and healthcare settings, is a determinist approach to AI decision-making that can have dire implications. An example is PredPol, predictive-policing software developed by the Los Angeles Police Department and UCLA to predict when, where, and how crime will occur. A case study in Significance, "To Predict and Serve?", noted that the approach disproportionately projected crimes in areas with higher populations of non-white and low-income residents. With AI, you adopt a statistical perspective on justice, which may produce skewed results that replicate and amplify existing biases.
Approach 2 poses serious problems and potential destruction in media and politics. Synthetic media, meaning AI-generated media content such as deepfakes, accelerates the loss of our shared grounds. It creates and distributes false information within a defined narrative with the objective of manipulating public opinion. This is one current example we have today, and more can be produced and created. So if you think the "my truth vs. your truth" conversation is bad now, imagine how atrocious this could get. It is the responsibility of each of us to maintain the processes and practices of meaning, alignment, and agreement, because they represent an abstract commons where we can all come, meet, and talk. If we do not protect it, we will enter loops and invite chaos, e.g., fascism in politics.
🌝 Example: Questioning the authenticity of AI-generated news articles to avoid spreading misinformation or contributing to societal division.
🧩 Practice: Fact-check AI-generated content before sharing it on social media, and encourage others to do the same to combat misinformation. Share with @areejalution your favorite tool. ➡️ Here is a list for you to check out
7. Prioritize Justice over All, Particularly over Technological Advances
For science to exist and thrive, we need to establish and maintain a just society. Gen AI systems can inadvertently perpetuate or amplify societal biases due to biased training data or algorithmic design. Since we are creating "smart" systems, let them be smart enough to avoid our mistakes. To eliminate discrimination and ensure fairness, it is crucial to invest in the development of unbiased algorithms and diverse training data sets. When race, gender, sexual orientation, or their proxies inform credit decisions or criminal-sentencing models that systematically disfavor a minority group, we create yet another system that loops and traps individuals. Therefore, we need statistically significant input variables to be reviewed to validate their use, the distribution of model results (e.g., scored records) to be reviewed independently, and clear roles and responsibilities for maintaining a view of regulations and their applicability to data management.
🌝 Example: Ensuring that AI algorithms used in hiring processes do not discriminate against marginalized groups based on biased data.
🧩 Practice: Contact your current favorite AI product and inquire about their bias detection and mitigation techniques in decision-making processes. Share with @areejalution the answer and tell me did it make sense to you?
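To make the bias review in this observation concrete: one widely cited heuristic for checking model outcomes is the "four-fifths rule," under which a group's selection rate below 80% of the highest group's rate is flagged for review. Here is a minimal Python sketch of such a check; the function names, threshold, and all data are hypothetical illustrations, not a complete fairness audit.

```python
# Minimal sketch of a disparate-impact check using the "four-fifths rule"
# heuristic: a selection rate for any group below 80% of the highest
# group's rate gets flagged for human review. All data is hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total); returns selection rate per group."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def flag_disparate_impact(outcomes: dict[str, tuple[int, int]],
                          threshold: float = 0.8) -> dict[str, bool]:
    """Flag groups whose selection rate falls below threshold * best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Hypothetical hiring-model outcomes: (candidates selected, candidates scored)
outcomes = {"group_a": (45, 100), "group_b": (30, 100)}
flags = flag_disparate_impact(outcomes)
# group_b's rate (0.30) is below 0.8 * 0.45 = 0.36, so it is flagged for review
print(flags)
```

A flag here is not proof of discrimination; it is exactly the kind of documented trigger for independent review of input variables and scored records that this observation argues for.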
8. Identify Ethical and Unethical Use Cases
We need to identify use cases that illustrate ethical and unethical applications of AI. It is all fine when we speak in the abstract: we agree on the bold lines, but we, whether individuals, companies, or governments, find we are not fine with the fine lines. The devil is in the details, they say, so we need solid use cases to help us navigate the details. AI is built out of models that learn, relearn, and evolve via systematic tracking, reporting, and root cause analysis of errors, near misses, and overrides. We need use cases that target the mechanisms required to address an ethical framework at these technical touch points. "Technical touch points" are where the details live. Kindly read observations #6, #9, #10, and #12 for examples that illustrate this observation.
🌝 Example: Assessing the potential consequences of AI-powered predictive policing algorithms on minority communities' rights and liberties.
🧩 Practice: Read use cases about AI algorithms used in hiring processes. How do you think one can ensure that the data training the AI models is not biased, causing discrimination against marginalized groups? Share with @areejalution your ideas.
9. Build an Integrated Human-Technology-Environment Design Process
It is about humans and the environment. Namely, business people, product managers, designers, and developers need to be aware of and responsible for a safe technology-environment-design process. The creation of detailed model testing and guidelines that enable us to test across a wide range of scenarios is vital, because any gaps can lead to disastrous results. How would we deal with the "unintended consequences" (the corporate parallel to collateral damage in politics) of a technology-environment malfunction? Entertain the following scenarios:
Autonomous vehicles rely on real-time data that ends up unavailable due to connectivity issues. What happens then?
Customers' sensitive personal and financial data is stolen by an external actor. What happens then?
Consider AI or gen AI products made for agricultural purposes when mistakes are made in our food resources. What happens then?
AI-driven autonomous weaponry in which human intervention is not permitted raises concerns about the potential loss of human control in critical decision-making processes.
🌝 Example: Obliging and enforcing the incorporation of user feedback and diverse perspectives during the design and testing phases of AI systems to address potential biases or usability issues.
🧩 Practice: Participate in a focus group or user testing session for an AI-driven product or service, and provide feedback on its usability and ethical considerations. Share with @areejalution your experience.
10. License Design and Enforce a Code of Ethics
The role of a designer, in so many well-educated and diverse people's heads, is unfortunately still shaped by the '90s and early 2000s, when the internet became public: someone who uses shapes, colors, icons, centered content, and the word gallery. Then it evolved from graphic design and web design to UX/UI design, and we keep explaining what UX vs. UI means. Design thinking was popularized, and everyone is a designer now! Just take this two-week "fun" training. While we still haven't resolved that, CX and the combination of marketing and design have been added to the mix in recent years. This is me, in one breath, describing some of the popularized design trends/methodologies that I, as an HCI researcher and designer, have had to navigate and job-hop through. But designers nowadays design products, services, and experiences that live in the environment, not interfaces or pretty slides. There is an interaction and an impact on humans, living beings, and the environment. This is complex. This is different from the typical visual design job. Imagine that designers do this today without established and agreed-upon standards, design ethics, or design licensing. (This opinion is heavily influenced by Mike Moreno, whose view I adapt.)
Ethical design refers to design that resists manipulative patterns, respects data privacy, encourages co-design, and is accessible and human-centered.
In fact, the industry "mentors" designers in 2-4 week bootcamps instead of qualifying their design skills, then asks them to design products, services, and experiences that impact humans and live in our economy. Why should you care?
Because at the moment, AI and gen AI products are being designed by professionals who have a loose role that can fit anything or nothing depending on the needs of the employer, rather than the needs of users, industry, and situations. Does this happen in other fields like engineering or IT? Do we trust an MBA holder to design software architecture, code, and implement a gov platform or an ecommerce website?
Finally, I will end this rather long observational rant from a cultural-identity perspective by asking: as the AI fever spreads and gen AI inherits issues stemming from the cultural imperialism that already exists across our data, media, and communications, how, with the current design process and designer role, can we address this and rectify the context? Do we have designers who can be held accountable for answering such a question? Did we empower them to do so? Did we make sure there is a standard to adhere to in design that is not about aesthetics?
🌝 Example: Requiring designers to undergo training on design and ethical design principles, obtain licenses (not commercial certifications), and report any violation to an internal department and then to an external one, such as a third-party organization, to ensure compliance with design ethics and AI ethics standards. Let’s take it seriously.
🧩 Practice: Design is not about pretty things, so investigate how industry-wide ethical guidelines for AI design are being implemented in your favorite apps. Start a conversation under #Designthatmakesense
11. Maintain the Protection of Individual Privacy and Freedom
As an individual living in these times, you need to start establishing a private life. Easier said than done. The fact is, even before the gen AI fever, your information was being leaked, used to make you buy things, and/or manipulated to influence your views on causes and topics. AI feeds on data, and all your data helps the big machine get smarter. You are still in control by building protection habits. I am borrowing these from Timothy Snyder: scrub your computer of malware; remember that email is a cloud service that is not private; consider using alternative forms of the Internet, or simply using it less; and have personal exchanges in person. Do not share sensitive information or documents while prompting gen AI, no matter how tempting. Your laptop camera can serve as a metaphor here: no matter how smart the hacker is, they cannot win over the camera cover. On a larger scale, to mitigate privacy risks, we must advocate for strict data-protection regulations and safe data-handling practices, and support companies and products that follow them.
🌝 Example: Encrypting personal data and limiting access to AI algorithms to protect user privacy from potential breaches or misuse.
🧩 Practice: Conduct a privacy audit of your digital footprint and take steps to enhance your online privacy and security settings. Recommend to @areejalution the privacy-settings changes you made.
NOTE: I will be sharing a privacy-audit spreadsheet; if you’re interested, join my new Abu Sherbet Chai newsletter here!
12. Empower the Human Influence
The consequences of over-dependence and overreliance on AI can include diminished empathy, a further decline in already diminished social skills, less emphasis on human connections, and a loss of creativity, critical-thinking skills, and human intuition. Notice that all these skills are the value proposition we have over AI. As a result, educational systems and programs need to rethink their methodologies and adjust their focus.
We want and need technology that makes us better, not one that disempowers humans. On a system level, human intervention needs to be part of the integrated technology-environment-design process mentioned in observation #9. Along with a human-intervention metric, dependence and overreliance on AI need to be among the criteria that disqualify digital products, at least public ones. Maintaining a balance between AI-assisted decision-making and human input is vital to human values and priorities. Finally, don't forget to leave the house, make eye contact with living beings, and have conversations with real human beings. Every now and then, unplug from the big machine and go into nature. Institutions should do more. Meanwhile, you can recharge and rejoice individually.
🌝 Example: Limiting screen time and engaging in face-to-face interactions to maintain human connections and prevent over-reliance on AI-mediated communication.
🧩 Practice: Schedule regular offline activities or social gatherings with friends and family to foster interpersonal relationships and creativity outside of digital environments. Don’t share with @areejalution.
Digital fasting is just as important!
13. Decentralize the Power and Equalize the Economy
AI can contribute to economic inequality and an extreme centralization of power by disproportionately benefiting already well-positioned individuals and corporations. Who owns AI, and who is developing it? Are the skills needed for this game changer available to everyone, especially minority groups? Who represents the voices of the disqualified? In fact, why are the disqualified disqualified?
Consider this: job losses due to AI-driven automation are more likely to affect low-skilled workers, which leads to a growing income gap and reduced opportunities for social mobility. What solutions are we coming up with to address this?
Another: when AI development and ownership are concentrated in the hands of a small number of corporations and governments, how do we address the exacerbation of this inequality and the limited diversity in AI applications?
We need to encourage decentralized and collaborative AI development to avoid a concentration of power, via policies and initiatives that promote economic equity: reskilling programs, social safety nets, and inclusive AI development that ensures a more balanced distribution of opportunities.
🌝 Example: Supporting community-based AI initiatives and investing in AI education and training programs to democratize access to AI skills and opportunities.
🧩 Practice: Volunteer with organizations that provide AI education and resources to underprivileged communities, and advocate for policies that promote economic equity in AI development. Share with @areejalution your story.
NOTE: I will be sharing more about these types of organizations soon. If you’re interested, join my new Abu Sherbet Chai newsletter here!
14. Create a Market for Gen AI Products that Align with Your Interests
This big machine and the capitalist economy depend on customer demand. Put that to the test and create demand for AI products that prioritize your needs, values, and priorities and that protect you from AI risks and threats. This observation targets entrepreneurs, product makers, designers, developers, business people, and consumers: everyone who participates in shaping this narrative and can be an advocate within organizations. Consumers need to work to ensure no single big company monopolizes gen AI; rather, we should have healthy competition across the market, with no exclusivity over the data. A fight-fire-with-fire strategy can be another way to create balance in the power pyramid.
🌝 Example: Choosing to purchase AI-powered devices or services that prioritize user privacy and data protection over those that prioritize data collection and monetization.
🧩 Practice: Research and compare AI products/services based on their ethical standards and user privacy policies. Let’s have a conversation, @areejalution.
15. Design Legal Regulations and Policies
I end with this final observation of mine, as ink on paper has been how we make things official across times and cultures. We have been using binding words and language to establish checks and balances between providers and consumers, and this needs not only to continue but, more urgently, to evolve. I arranged this observation to be the last one because I want you to consider everything you have been reading through its lens. I hope you can see that it is crucial to develop new legal frameworks and regulations to address all the novel issues arising from AI technologies.
There are numerous topics policies need to cover; I will include liability and intellectual property rights here as well. Legal systems must evolve with technology to protect the rights and interests of all. We need artifacts produced by legal systems, such as policies, best practices for secure AI and gen AI development and deployment, and local and international regulations. Then, collectively, we need to foster international cooperation to establish global norms and regulations that protect us against present and future threats.
🌝 Example: Enacting legislation to regulate the use of AI in sensitive areas such as healthcare and criminal justice to ensure fairness, accountability, and transparency.
🧩 Practice: Pick an area such as healthcare or criminal justice, and an organization within it. Read through their website and other media. Tweet questions about how they ensure fairness, accountability, and transparency in their AI usage in terms of policies and regulations.
It All Rhymes in the Head of the Creative
When I get inspired, it comes in written and visual form. I created this collage collection, which merges Orientalist art and emerging technologies, to tell my observations visually.
Sources:
Confronting the Risks of Artificial Intelligence, April 26, 2019, article by Benjamin Cheatham, Kia Javanmardian, and Hamid Samandari, https://www.mckinsey.com/capabilities/quantumblack/our-insights/confronting-the-risks-of-artificial-intelligence
SQ10. What are the most pressing dangers of AI? https://ai100.stanford.edu/gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence-ai100-2021-1-0#_2021SQ10ref10
To Predict and Serve?, Significance, Volume 13, Issue 5, https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1740-9713.2016.00960.x
Philanthropy’s Techno-Solutionism Problem, https://knightfoundation.org/philanthropys-techno-solutionism-problem/
Artificial Intelligence Risk & Governance, by the Artificial Intelligence/Machine Learning Risk & Security Working Group (AIRS), https://aiab.wharton.upenn.edu/research/artificial-intelligence-risk-governance/
On Tyranny: Twenty Lessons from the Twentieth Century, by Timothy Snyder