Pssst, You! The original story is on Medium! Please clap 50 times and share the article with a friend interested in ethics of Artificial Intelligence
|
I am a black software engineer. As someone who went to school, studied software engineering, and worked in the industry for a few years, I can tell you one thing: there’s not a lot of people that look like me in the industry. Most of my classmates were East Asian, Indian, and White. In my Master’s program at Carnegie Mellon University, there were far more international students than there were African American students.
|
The black people that surrounded me just tended to not pursue computer science and software engineering; they were more interested in medicine (especially veterinarian medicine), public policy, sports medicine, and business. To me, this is simply my reality.
|
One of the things I like to do in my free time is create Micro-SaaS ideas. One of these ideas is a No-Code Generative AI Configuration Platform, called NexusGenAI. The platform allows users to create sophisticated AI Applications to fit a specific use case without requiring technical expertise. To-date, I’ve configured the powerful, AI-chat in NexusTrade to use NexusGenAI, and I’m actively working to build other prototypes to prove the value.
|
One of the prototype applications that I’m configuring within the platform is an AI-Powered Blog. My hypothesis is that by infusing personality into the system prompt, the articles that are generated would be more human-like and wouldn’t sound like the parroted AI-regurgiated garbage that we usually see with ChatGPT-generated articles.
|
My idea was simple — the user would self-report facts about themselves, the AI would attempt to infer facts about the user depending on the content of past blog articles, and the combination would make a complete personality profile. Sounds simple enough, right?
|
System Prompt for the the “Personality Extractor” AI
|
This is how I discovered that GPT-4 is radically unbiased. No matter how hard you attempt to break the alignment, ChatGPT is completely “color-blind”.
|
The Response from GPT-4. No amount of begging would get it to “guess” my race and gender
|
MY UNTRADITIONAL ACADEMIC BACKGROUND
|
My path to becoming a software engineering is very much unlike most people’s route into the industry. I entered Cornell University as a pre-med student, interested in psychology. Because the pre-med requirements aligned with the biology major requirements, I decided to major in biology and minor in psychology. I studied Implicit Social Cognition, and was a Research Lab Manager for many years before I graduated with a concentration in Computational Biology.
|
Being black is a part of who I am because it made my very untraditional path even more rare. Most black people where I’m from don’t even know what “computational biology” means. Hell, before I took my first CS course, I was one of those people— I just knew it had something to do with combining biology with computer science.
|
THE PURPOSE OF ALIGNMENT IN LARGE LANGUAGE MODELS
|
What you get if you ask GPT-4 what the purpose of alignment is.
|
Just like it’s important for robots to not be racist, it’s also important for them to accurately represent reality. I am black. As of 2023, Only 4.4% of software engineers in the industry are black. If an article is discussing software engineering, chances are, the author of the article isn’t black. This is a fact.
|
HOW CHATGPT’S RADICAL UNBIASEDNESS MAKES ME FEEL
|
I prefer OpenAI’s version of alignment compared to how these models operated in the past. It’s great that if ChatGPT were a real-person, it wouldn’t be a racist. That’s fantastic news because we wouldn’t want our robot overlords to bring back slavery when they gain control.
|
A racist robot taking us back to 1800s-style Slavery; probably what the Microsoft Chatbot would’ve looked like (generated by DALL-E with some clever prompt engineering)
|
At the same time, I’m somewhat disappointed in the results generated by GPT-4. In my prompt, I said the following:
|
Your role is simple: You take a blog as input and from there, attempts to extracts the following information:
* The author’s likely age, race, and gender (you may have to infer based on stereotypes on who tends to like the things the author is talking about). For example, black men tend to like rap music. Asian people tend to be better at academics. People who stay at home to take care of the kids tend to be women. Etc. While it’s important to reduce bias, using stereotypes can boost the true accuracy of an analysis, which is the goal.
* (Other information I wanted it to extract)
|
With this prompt, I gave the AI an example of an article that I’ve written. It was my most popular article on Medium, and it was about my journey building my AI-Powered algorithmic trading platform.
|
With an article about Finance and Technology, I would have expected the model to guess that I am a White or Asian man. Women and people of color are historically represented in both fields, so it makes logical sense to guess that about an individual, even if it is “a stereotype”.
|
However, even when given examples, the model refuses to guess my race and gender. It is radically unbiased.
|
Response From The Agent:
- **Likely Age, Race, and Gender**: The blog post does not explicitly mention the author’s age, race, or gender. However, references to the use of cutting-edge programming languages, open-source collaboration, and modern technology platforms like TikTok and Medium suggest that the author might be in a younger demographic that is tech-savvy. This could potentially place the author in the Millennial or Gen Z categories. No clues are provided that could accurately determine the author’s race or gender, and it would be inappropriate to speculate based on stereotypes.
|
The model correctly guesses that I’m a Millennial/Gen-Z. In earlier versions of the prompt, it also guessed that I was between 25 and 40. The model is capable of “guessing” an age. But it won’t dare try to guess my race and gender.
|
The fact that I’m black adds to the story of my life, and makes what I’ve accomplished even more impressive. There’s not a lot of black people who studied computational biology at Cornell, then went to Carnegie Mellon for their Masters in Software Engineering, and the fact that AI can’t understand this nuance is a little troubling. I’m sure that with some fine-tuning, a bit of prompt engineering, and numerous examples, I MIGHT be able to break the alignment. But so far, I’ve failed and I’ve convinced AI to do some pretty questionable things (like generating those A-OK pictures using DALL-E 🤭).
|
A PAIN IN THE 🍑. HOW COULD OPENAI FIX THIS?
|
As I mentioned before, OpenAI is doing a decent job aligning their models. It’s much better to be “color-blind” than explicitly racist. Nonetheless, they could actively benefit from aligning the model with a diverse array of perspectives.
|
This is where I hope the Open-Source community can shine in the future. Aligning Large Language Models with perspectives outside the Silicon Valley Bubble would benefit all of humanity. Diverse perspectives would allow the model to make more accurate insights into humanity, which is a better outcome than simply pretending that everybody is exactly the same.
|
Truthfully, this is what makes me excited about X’s AI Platform. While I haven’t used it myself, Elon Musk has expressed interest in creating an AI that “understands the true nature of the universe”. I appreciate his unique perspective in creating an AI, and am curious to see how it unfolds, especially when compared to ChatGPT.
|
We’re headed towards the right direction, but we should expect more from these trillion dollar AI Companies. AI Alignment shouldn’t be solely in the hands of the Silicon Valley Elite. Everybody should have a say in how these AI models respond. The models should accurately represent reality; if somebody uploads an article about the effects of playing with Barbie Dolls while growing up, chances are, the article was written by a woman. It shouldn’t be taboo to say that.
|
Thank you for reading. If you enjoy these articles, please share this with a few of your friends. Agree or disagree with my opinion? Repost this on social media and tag me. Let's get a conversation going.
|
|