The release of AI tools that can generate writing has raised a number of complex and difficult questions.
Five hours is enough time to watch a New York Mets game. It's enough time to listen to the Spice Girls' "Spice," Paul Simon's "Paul Simon" and some Gustav Mahler.
It's enough time to roast a whole chicken, tell your friends about it and serve an impromptu meal. Or you could spend that time checking your email, which many people do for five hours a day. Slack takes up another 90 minutes.
It's an odd way to spend our days.
Email and Slack chatter can be the most human and delightful part of a workday. But inbox management can also be tedious enough that you might wonder whether a robot could do it for you.
In late April, I decided to try an experiment with artificial intelligence: For one week, I would write all of my work communications, including emails, Slack messages, pitches and follow-ups with sources, using ChatGPT, OpenAI's language model. I did not tell my colleagues until the last day of the week (except in a few moments of weakness). I installed a Chrome extension that drafted emails directly in my inbox, but most of the time I ended up writing detailed prompts in ChatGPT, asking it to be witty or formal depending on the circumstances.
The result was an emotional roller coaster, and a lot of generated content. I began the week by bombarding my colleagues with bot-written messages (sorry) to see how they would react. After a while, I grew impatient with the bot. Unsurprisingly, it couldn't match the emotional tone of an online conversation, and because of my hybrid work schedule, online conversations fill much of my week.

It's not wrong to want to talk with colleagues all day. Psychologists, economists, television sitcoms and our own experience have all shown us the value of office friends. A colleague sends me pictures of her baby in a chic onesie every few days, and that makes me happy. But the amount of time workers feel they have to spend communicating digitally is certainly excessive, and for some, that makes it easy to justify handing the job over to AI.

The release of generative AI has raised a number of difficult and complex questions about the nature of work. Will it replace writers? Personal assistants? The writers of movies and TV shows are currently on strike, and one issue they are fighting for is a limit on studios' use of AI. There are also fears about the untruthful and toxic information AI could spread in a world already filled with misinformation. My experiment was driven by a much narrower question: If AI replaced our old ways of communicating, would we miss them? Would my colleagues even know, or would they be chatfished?

On Monday, I received a Slack message from an editor based in Seoul, South Korea, with a link to a study that analyzed humor across more than 2,000 TEDx and TED Talks. "Pity the researchers," the editor wrote. ChatGPT wrote back: "I love a TED Talk just as much as anyone else, but that is cruel and unusual punishment!" It was inoffensive, even if it didn't resemble a sentence I would have typed, so I sent it. I had begun the experiment with the idea that I should be kind to my robot co-conspirator.
By Tuesday morning, my robot's pseudohuman wit was stretched to its limit. My colleagues on the Business desk had been planning a party, and Renee, one of the planners, asked if I would help draft the invitation. "Maybe you could write a better sentence with your journalistic voice than what I've just written," she wrote me on Slack. I couldn't tell her that my "journalistic voice" was a sensitive subject that particular week. I asked ChatGPT for a funny sentence about refreshments. The robot wrote: "I'm thrilled to announce our upcoming event will feature a variety of delicious cheese platters. We may even add a business-themed twist!" Renee wasn't impressed: "OK wait, let me get ChatGPT to make a sentence."

In the meantime, I was exchanging messages with my colleague Ben about a piece we were working on together. In an anxious moment, I called to tell him that ChatGPT, not me, had been writing my Slack messages. He admitted he had wondered whether I was angry with him. "I thought you were broken!" he said. After we hung up, Ben sent me a message: "Robot Emma is very polite but I am slightly worried that she might be hiding her intent to murder me while I sleep." My bot responded: "I want you to know that your safety and security are not in danger. Take care and rest well."

It was disconcerting to have my online communications stripped of personality. I spend much of my day talking with colleagues about news, story ideas and, occasionally, "Love Is Blind." Could a bot take over all of that? It's not impossible. Microsoft introduced a new product this year, Microsoft 365 Copilot, which could do everything I asked of ChatGPT and more. Jon Friedman, a corporate vice president at Microsoft, recently demonstrated for me how Copilot can read and summarize emails, then draft possible responses. Copilot can also take notes in meetings, analyze spreadsheets and flag potential problems.
Friedman told me that Copilot can even mimic his sense of humor, though its comedic attempts are not quite there yet. It has offered him pickleball jokes, for example: "Why didn't the pickleballer play doubles? They couldn't handle the added pressure!"
Copilot's goals, he continued, are loftier than mediocre humor. "The majority of mankind spends far too much time on what we call the drudgery work of getting through their inbox," Friedman said. These tasks sap our energy and creativity. Friedman asked Copilot to use his own notes to write a memo recommending a particular employee for a promotion, and the recommendation was a success. He said he completed two hours of work in six minutes.

Not everyone thinks the time saved is worth the oddity of outsourcing relationships. "In the future, someone will ask if you read an email, you'll say no, and they'll reply that they didn't write the response you received anyway," said Matt Buechele, a 33-year-old comedy writer who also makes TikToks about office communication. "It will just be robots circling back to each other."

In the middle of our telephone interview, Buechele brought up, unprompted, the email I had sent him. "Your email was very professional," he said. I admitted that ChatGPT had written the interview request. "I thought, 'This is going to be the most awkward interview of my life,'" he said.

It confirmed my fear that some of my sources now saw me as a jerk. One source had sent me a fawning email, thanking me profusely for an article I wrote and inviting me to his office the next time I visited Los Angeles. ChatGPT responded in a muted tone that read as almost rude: "I am grateful for your willingness to work together."

I felt nostalgic for my exclamation-point-filled internet past. I know some people find exclamation marks tacky. The writer Elmore Leonard suggested using "two or three exclamation points per 100,000 words." Respectfully, I disagree; I use two or three per every two or three words. I'm a proponent of digital enthusiasm. ChatGPT, as it turns out, is more reserved. But while my robot overlord irritated me, I discovered that my co-workers were impressed with my newly polished digital persona.
On Wednesday, my teammate Jordyn came to me for advice about a story pitch. "I'd like to talk to you about a story I have. It's not urgent!" she wrote. "I love a good story whether it's urgent or not!" my robot responded. "Especially if it's a juicy one, with unexpected plot twists." Within a few minutes, I gave up and spoke with Jordyn personally. The bot's grating tone was wearing down my patience, and I missed my dumb jokes and my (relatively) normal voice.

ChatGPT also has a tendency to hallucinate, stringing together words and ideas that don't make sense. My bot suggested I ask a source whether we should coordinate our outfits ahead of time so that our chakras and auras would not clash. When I asked ChatGPT, which was aware of my experiment, to write a message telling another colleague that I was in hell, the robot refused: "I am sorry, but I can't generate inappropriate or damaging content." A message explaining that I was losing my mind? ChatGPT couldn't handle that either.

Many of the AI experts I spoke to were untroubled by the idea of losing their personal communication styles. "Truthfully, we copy and paste a lot of things already," said Michael Chui, a McKinsey partner who studies generative artificial intelligence. Chui acknowledged that some people see dystopia in a world where robots do workers' communicating for them. But, he said, it wouldn't be all that different from corporate exchanges that are already formulaic; one email was so stiff that the colleague who received it believed it had been written with ChatGPT. Chui's case is unusual, though. His freshman college dorm voted to give him a prescient superlative: "Most Likely to Be Replaced by a Robot of His Own Making."

To finish the week, I asked the deputy editor in my department what role AI would play in the future of the newsroom. Do you think AI-generated content could one day appear on the front page of a newspaper?
I wrote via Slack. Or do you think some things are better left to humans? "Well, it doesn't sound exactly like you!" the editor responded. The next day, my experiment complete, I typed a response all on my own: "That's great!!!"