We’ve been amazed by the power, both good and bad, of technology over the past few years. It has reshaped culture and society so indelibly that we often struggle to rein it in. But no app has made our collective jaws drop like ChatGPT, and none has provoked stronger reactions on our team, both positive and negative. Simply put, you can look at this as the start of the AI revolution: the moment it went from “maybe one day” to “it’s really here.”
ChatGPT is the brainchild of OpenAI, a research laboratory founded in 2015 by Elon Musk, Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba and John Schulman. You may be familiar with a few of their other products, such as DALL-E, which produces images from user text input, and GPT-3, an autoregressive language model that produces human-like text using deep learning. ChatGPT is essentially an amalgamation of the two, wrapped in one of the oldest widgets on the internet: the chatbot. It can produce human-like text responses, ask follow-up questions, admit mistakes and challenge incorrect premises. It’s like nothing we’ve ever seen before.
Now, to be fair, we’ve seen other tools that “promise” results like this, so naturally many of us were skeptical. But let’s look at how ChatGPT is different:
Years of deep research went into building ChatGPT, so the foundation is strong, but closed lab testing is different from the real world. We all know that. How does it fare? Let’s show you…
Let’s start out simple by asking ChatGPT to tell a story. Since we live and breathe Salesforce, we’re going to give it a decidedly different spin. We’ll ask it, “Tell us a story about how Salesforce was founded, but make the main characters bears.”
So far, so good. We took a nonsensical premise and ChatGPT produced a coherent, realistic, if absurd, story of how Salesforce was created. It got the main character’s name correct (Marc Benioff), it got small details right, such as what Salesforce does, and most importantly, it made the story about bears.
We were impressed, but we’ve seen tools such as Jasper.ai produce similar results, though nowhere near as good or as human-like.
We’re going to get a little more specific with the next question and push one of ChatGPT’s key capabilities: its ability to write workable code. In Salesforce, you write formulas for buttons and fields all the time. It’s not very difficult, but if you don’t know the syntax it can get a bit confusing. Let’s see how ChatGPT fares when we ask, “Write a formula for a button in Salesforce that updates the opportunity to Closed Won when clicked.”
At first glance, everything looks believable and, more importantly, logical. Beyond that, it even generates step-by-step instructions on how to use the code, in natural language. Astounding. But let’s see whether the instructions make sense and the code actually works.
This is where the facade started to wear away a bit. If you followed only these instructions, you would be in a bit of trouble. The step-by-step instructions are missing key steps, such as going to Object Manager first and then to the object. And once you do find the “Buttons, Links, and Actions” option and try to create a new button, you will quickly see that there is no “Formula” option in the “Content Source” dropdown. Even if you chose “URL” instead of “Formula,” copied over the code, added the new button to your layout (which it doesn’t tell you to do) and then tested it, you would be presented with this:
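For contrast, here is a minimal sketch of one way the requested behavior can actually be implemented: a small Apex method (for example, wired up to a Lightning quick action) that performs the update. The class and method names are our own illustration, not anything ChatGPT produced:

```apex
// Hypothetical alternative to the broken formula button: an Apex method
// that sets the opportunity's stage and saves it. Class and method names
// are illustrative.
public with sharing class OpportunityCloser {
    @AuraEnabled
    public static void closeWon(Id opportunityId) {
        // Fetch the opportunity the button was clicked on
        Opportunity opp = [SELECT Id, StageName FROM Opportunity
                           WHERE Id = :opportunityId LIMIT 1];
        opp.StageName = 'Closed Won';
        update opp; // persist the change
    }
}
```

The point is not that this is the only approach, but that a working solution looks nothing like the formula-button instructions ChatGPT generated.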
What initially looks very promising turns out to be quite broken in practice. We were disappointed, but we wanted to keep testing.
For the last experiment, we wanted to really challenge the platform, so we asked ChatGPT to write an Apex script that saves user profiles to a CSV file. This is the exact moment we knew that everything had really changed.
Producing a formula is interesting, but formulas are used by numerous applications. Apex is Salesforce’s proprietary scripting language, and we were shocked to see that ChatGPT not only produced something logical, it produced something that would work.
The loop is wrong: to stay within Salesforce’s DML governor limits, you would need to initialize a list first, add the changed records to it and then perform the update outside the loop. But for 20 or so records, this script, amazingly, would work.
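The fix described above can be sketched as follows. The query and the records involved are illustrative, not ChatGPT’s original output; the shape of the pattern is what matters:

```apex
// Bulkified version of the pattern described above: collect changed
// records in a list inside the loop, then issue a single update outside
// it, keeping the script within Salesforce's DML governor limits.
List<User> usersToUpdate = new List<User>();
for (User u : [SELECT Id, ProfileId FROM User]) {
    // ... apply whatever change the script needs to each record ...
    usersToUpdate.add(u);
}
if (!usersToUpdate.isEmpty()) {
    update usersToUpdate; // one DML statement instead of one per record
}
```

With a DML statement inside the loop, the script fails once it crosses the per-transaction DML limit; moving the update outside scales to thousands of records.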
The beauty of ChatGPT and OpenAI right now is that they give you a tremendous jumping-off point for your code, but by no means is it the final code.
Internally we have a lot of strong opinions about ChatGPT, OpenAI and the future of AI. There is an abundance of moral, ethical and privacy-related concerns around not only this platform but AI in general. First, AI like this is riddled with privacy concerns: where is the data being sourced? What biases or factual inaccuracies lurk in the data used to train the models? Has society been contributing to this project without its knowing consent? How is it being used by advertisers to monetize consumers without their knowledge?
Second, where is the ethical boundary when AI-generated content is served to people unawares? Should users know that what they are consuming was produced by an AI and not a human being?
Beyond that, there is a very real chance that misinformation spreads like wildfire if AI picks up on misleading facts. It’s almost a self-feeding system: if misinformation goes in, misinformation comes back out. ChatGPT is supposed to filter out biases and untruths, but we’ve seen numerous offensive examples of it failing miserably.
There is no doubt about it: ChatGPT is powerful, both in its ability to create complex material from a simple input and in what it portends for the future of technology. What does it mean for developers who produce scripts day in and day out? Can ChatGPT and its successors be used for harm, such as creating malware on demand? The number of questions that storm into your head as you use the platform is staggering, but the biggest question, the one that can’t be answered right now, is: what comes next?