Writers Life | When can we use AI for editing?


There has been a lot of talk about artificial intelligence (AI) on writer forums over the last few years, since the release of ChatGPT in November 2022. Much of that conversation overlooks the fact that AI has been around in different forms for decades, and that’s what I’m looking at today.

Kinds of AI

Before we start, it’s important to differentiate between the different kinds of AI:

Generative AI

When people rant (or fawn) about AI, they’re usually referencing ChatGPT, Copilot, Perplexity, or one of the many other models that not only mimic human speech but can also create new and original writing, images, and music … even computer code.

But not all AI is generative AI.

Traditional AI

The most traditional AI is purely based on rules and algorithms, and is used for repetitive tasks that follow strict rules—like a spellcheck. The issue with spellcheck and other basic AI tools is they can only check if a word is in the dictionary. They can’t tell if the word is used correctly—should the sentence say no or know? There, their, or they’re? Lie or lay or laid?
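A toy sketch makes the limitation concrete (this is purely illustrative; real spellcheckers are far more sophisticated than a bare dictionary lookup, and the word list here is my own invention). A made-up word gets flagged, but a misused homophone sails straight through because it is a real word:

```python
# Toy rule-based spellcheck: flags only words absent from the dictionary.
# Illustrative sketch only -- not how Word or any real tool works.

DICTIONARY = {"i", "no", "know", "the", "answer"}

def spellcheck(sentence):
    """Return the words not found in the dictionary."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    return [w for w in words if w not in DICTIONARY]

# "no" is a real word, so the wrong homophone goes undetected:
print(spellcheck("I no the answer"))   # -> []
# Only a genuinely made-up word gets flagged:
print(spellcheck("I knw the answer"))  # -> ['knw']
```

The checker has no idea what the sentence means; it can only ask, word by word, “is this in my list?”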

Predictive AI

Predictive AI builds on traditional AI, in that it forecasts outcomes based on an analysis of historical data. It uses pattern recognition to identify when a word is used incorrectly—but it’s not always right.
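As a rough sketch of the idea (the corpus, function names, and scoring below are all my own invention, not how any commercial tool works), a predictive checker could count word pairs in historical text and flag a their/they’re choice that the data says is unlikely:

```python
from collections import Counter

# Tiny stand-in for "historical data" the model learns patterns from.
CORPUS = [
    "they're going home",
    "they're going out",
    "their house is big",
    "their dog barks",
]

# Count how often each word pair appears (a simple bigram model).
bigrams = Counter()
for sentence in CORPUS:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[(a, b)] += 1

def suspicious(sentence, candidates=("their", "they're")):
    """Flag a their/they're choice the data says is unlikely."""
    words = sentence.split()
    flags = []
    for a, b in zip(words, words[1:]):
        if a in candidates:
            # Has the corpus seen an alternative more often before this word?
            best = max(candidates, key=lambda c: bigrams[(c, b)])
            if best != a and bigrams[(best, b)] > bigrams[(a, b)]:
                flags.append((a, best))
    return flags

print(suspicious("their going home"))    # -> [('their', "they're")]
print(suspicious("they're going home"))  # -> []
```

The catch is visible even in this toy: the model only knows what its training data happened to contain, which is why such tools are sometimes confidently wrong.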

Conversational AI

Conversational AI is used for chatbots and virtual assistants, which can mimic natural language and provide customers with correct answers based on the rules and information they’ve been taught.

AI in Editing

Using AI in editing isn’t new. Word’s spellcheck function has been around for decades, and that’s a basic form of AI. More recently, we have grammar checkers such as Word Grammar, Grammarly, ProWritingAid, and others.

The challenge is that these programmes aren’t always right. My experience using Grammarly not long after it launched was that it was right around half the time. (The challenge for writers is knowing which half.)

I’ve seen statistics that say generative AI tools such as ChatGPT are wrong around 20% of the time, even on factual queries. That makes it more accurate than Grammarly, but still a long way from perfect—and it means anyone using these tools needs to know enough about the subject to tell right from wrong before they use the information.

Recent Experience

I’ve recently edited a novel where I completed a Word Grammar check (after the spellcheck). In theory, AI should be able to check grammar the same way it checks spelling, because grammar also follows clear rules:

  • Is this word spelled correctly?
  • Have I used the correct word in this sentence?
  • Are the verb tenses in this paragraph consistent?

My experience does not confirm this assumption.

Here are some (slightly anonymised) sentences that Word’s Grammar check identified as needing correction:

The people became delirious.

This isn’t the greatest sentence ever written. You could argue “the people” is vague. You could point out that “delirious” has multiple meanings and it’s not clear which meaning is intended. I’d respond that the sentence makes sense in context. But those weren’t the issues Word picked up. No, Word Grammar suggested I change delirious to delicious.

Delicious? Um, no. We don’t eat people.

Here are some other clearly wrong examples: the original sentence, with Word Grammar’s suggestion in brackets:

The sooner she’d hear [Word: heard] the news.

These next two amused me for their inconsistency:

There were [Word: was] no children.
There was [Word: were] no line.

My husband didn’t need to read the book to know Saffy is a female:

Saffy blew out a [Word: his] breath.

People stare at her scars, not her stars (perhaps because they’re scared themselves):

She was scarred [Word: scared] for life, and she hated the stares [Word: stars].

Saffy is not chasing her lips (that’s a scary image!):

Saffy pursed [Word: pursued] her lips.

This one sounds a little Animal Farm:

The cows leaned [Word: learned] forward.

And finally, the there/their/they’re conundrum:

They’re [Word: their] going home.

In these examples (and more), accepting Word’s suggestion would have introduced an error: the cardinal sin of editing.

Of the 300 “corrections” Word suggested, only 3 were actual errors.

The rest were hallucinations like the sentences above, or words with apostrophes, where Word Grammar hadn’t picked up that the apostrophe was, in fact, an apostrophe. So it was highlighting words like “hadn’t” or “there’s” as errors when they are perfectly correct.

(Apparently this is a known Word glitch, resulting from moving documents between word processors, e.g. Google Docs to Word or Apple Pages to Word.)

All this is to say:

If Word can’t get their vs. they’re correct (or delirious vs. delicious), am I going to trust AI with anything more important?

Are you?


Published by Iola Goulton @iolagoulton

Iola Goulton is the empty-nest mother of two who lives with her husband in the coolest little capital in the world, and writes contemporary Christian romance with a Kiwi connection. She works full-time for a government agency, wrangling spreadsheets by day and words by night.

Join the Conversation

1 Comment

  1. I use the word check in Libre and am constantly adding real words to the dictionary, which it keeps telling me are wrong. I understand surnames, but some of the words it flags are really amazing.
    On ChatGPT I have learnt to preface with “I don’t need your advice” or “I just need a quick, concise answer, but don’t try to fix me”, depending what I ask. And I have had arguments with wrong info. But it’s infuriating when it says “oh yes, you are right”. Sometimes, even when I provide correct info or a link to correct info, it will still tell me I am wrong, or that it’s just one link and it’s not right (often updated info it hasn’t processed yet). I still use it cos it’s handy, but if I need a concise answer, like checking whether an email was legit or not, I tend to use Co-Pilot as it tends to not be as frustrating. (True story: a friend’s husband uses ChatGPT and it called him stupid. He was asking for some info to do something, probably a handyman-type question, and Chat actually told him he was stupid. This came up after I said how, following my own question, it tried to micro-manage me when I had the info and didn’t need it; I then said how Co-Pilot was so much more helpful, without the insults.)

    I don’t write, but I do notice how at times I get suggestions to change something that is actually right. My gramma stinks, but half the time it doesn’t help, or it adds commas where they’re not needed.

