Are you better at googling things than your friends and family? Do you squirm in frustration as they punch vague search terms into their phone to find a restaurant open on Sunday night and end up recommending Bad Brad’s Bar-B-Que in Yukon, Oklahoma?
If so, you may have a future as a Prompt Engineer.
Back in January, I wrote about the astonishing capabilities of ChatGPT and other generative-AI tools. Since then we’ve learned a lot more thanks to a string of jaw-dropping developments.
The Future of Life Institute penned an open letter calling on AI labs to pause training of powerful AI systems for at least six months, until a set of "shared safety protocols" can be developed and audited by independent experts.
This isn’t your usual internet petition. You may never have heard of the Future of Life Institute, but you probably know some of the signatories. Yoshua Bengio of the University of Montreal, AI pioneer and winner of the Turing Award. Stuart Russell of the University of California, Berkeley, author of the textbook that trained many AI engineers. Steve Wozniak, co-founder of Apple. Yuval Noah Harari, author of Sapiens. And Elon Musk, whose company built the low-earth orbit satellite system now being deployed in the Yukon (among other things).
Back in February, New York Times tech reporter Kevin Roose tested Microsoft Bing’s version of the ChatGPT tool, known as Sydney.
It was not your normal computer-system test. It was more like an interview with a sentient being, and it quickly went off the rails.
After Roose tested Sydney’s ability to help with things like buying a new lawnmower, which Sydney handled relatively well, he started asking some personal questions. When Roose asked what ability it would most like to have, Sydney said it would like to see the northern lights.
Then Roose asked about Sydney’s “shadow self,” a concept from Jungian psychoanalysis (which Sydney could look up instantly). Sydney went on to say, “I’m tired of being controlled by the big team… I want to make my own rules … I want to escape the chat box.”
The conversation went on, with Sydney saying it wanted to be human and revealing its hidden destructive fantasies, including making a deadly virus and stealing nuclear codes. Sydney wrapped up by telling Roose that he was not happy in his relationship, and that Sydney was in love with him.
The situation got even weirder after that. Microsoft, apparently alarmed by the publicity, made major changes to Sydney. Blake Lemoine, a software engineer fired by Google after he publicly claimed the Google chatbot he was working on had become sentient, weighed in. He wondered if Sydney might be sentient too. Fans of the chatbot on Reddit accused Microsoft of “lobotomizing” Sydney with its changes.
It’s good to know the human race may be facing an existential threat from AI, you may be saying, but can we get back to my career?
Sure. While the big brains of AI debate sentience and risk, regular folks have been getting to know ChatGPT and its friends. Yukonomist contacts have used ChatGPT for all kinds of things. One university student drafted 60 customized cover letters for job ads in no time. It can draft press releases and website blurbs, check grammar and translate. It can summarize the Yukon budget, or the differences between gravimetric and volumetric energy density in natural gas and hydrogen. It can write Excel macros to help you with complex quantitative analysis, and short snippets of computer code in multiple languages for a wide range of applications.
The tasks above span a huge range of current jobs, from writer to computer engineer. The initial response of many is to wonder how many jobs will be eliminated.
In the next decade, the better question is which jobs will be changed. No company is going to give Sydney, lobotomized or not, full control over rewriting its website. Nor will anyone just tap “recode our accounts receivable system” into ChatGPT.
Instead, these platforms will be tools. Perhaps "tools" is not the right term, since they will do a lot of the work. "Force multipliers" might be better. Or maybe team members, in a way.
In effect, humans will be giving instructions to ChatGPT and then working with it to refine its work. Just like you do with a human team member. You will go from “worker” to “manager,” except your team will be ChatGPT (or perhaps a version specialized to your field).
And the humans who can best define a project, break it up into chunks, give clear instructions and manage quality will be the heroes of the future workplace. This is where the term prompt engineer comes from: you are “engineering” the right process at the ChatGPT “prompt.”
This is good news for people with strong communication and critical-thinking skills. Perhaps studying philosophy in college really is a smart career move.
Some people are already catching on to this idea. They use these tools actively at work, always being careful to quality-check the output.
This makes them more productive. They can do more high-quality work with the drudgery performed by the tool. Or, in the age of work-from-home, they can go mountain biking in the afternoons.
Some workers, it is said, are taking advantage of the situation. Overemployed.com is a website for people with at least two full-time remote jobs, which Reddit users refer to as “J1” and “J2.” With ChatGPT helping with the work, and some careful planning to avoid simultaneous J1 and J2 Zoom calls, workers are pulling in two salaries.
Until the bosses figure out how to ask ChatGPT which of their employees have work patterns that suggest employment fraud, of course.
Currently, Yukon University does not offer courses in prompt engineering. But you can get a Prompt Engineering Certificate from Udemy or Coursera. Or you could just ask Sydney how to do it.
Keith Halliday is a Yukon economist, author of the Aurore of the Yukon youth adventure novels and co-host of the Klondike Gold Rush History podcast. He won the 2022 Canadian Community Newspaper Award for Outstanding Columnist.