We invited a select group of niche publishers to discuss their questions about AI directly with Guy Tasaka.
First, he demonstrated how to create a writing co-pilot in Claude, which he now uses to help write his column. Then he took questions from the group. Below is a summary and video clips from the roundtable.
How to create a writing co-pilot in Claude
First, he said, publishers need to understand that AI hallucinates when you “ask it to write a story that’s never happened before.”
Putting data or information into Claude or GPT first, that is, before the prompt, lets the AI apply large language model technology to real data.
For a writing prompt, this could be the reporter’s earlier stories, notes, or early drafts.
The process is known as RAG, or Retrieval-Augmented Generation, as displayed in this diagram.
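The core RAG idea can be sketched in a few lines: retrieve the most relevant documents from a small knowledge base, then prepend them to the prompt so the model writes from real data instead of inventing facts. This is a minimal illustration using naive keyword overlap, not Tasaka’s actual setup (real systems typically use embedding-based search):

```python
def retrieve(query: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (illustrative only)."""
    query_words = set(query.lower().split())

    def score(doc: str) -> int:
        return len(query_words & set(doc.lower().split()))

    return sorted(knowledge_base, key=score, reverse=True)[:top_k]

def build_prompt(query: str, knowledge_base: list[str]) -> str:
    """Augment the user's request with the retrieved context (the 'A' and 'G' in RAG)."""
    context = "\n\n".join(retrieve(query, knowledge_base))
    return f"Use only the context below.\n\nContext:\n{context}\n\nTask: {query}"
```

The retrieval step is what grounds the generation: the model’s answer is constrained to the supplied context rather than to whatever its training data suggests.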
Tasaka’s own process starts in Claude’s project area. He uses the right section of the project to upload data.
Tasaka had input ten articles into the project’s right section before entering the prompt for Claude to write his column.
“The way that you properly get an article written is you add structured data. This whole area of your data is called RAG… It’s taking your structured data, meaning a database or an Excel spreadsheet, or unstructured, meaning a Word doc or a PDF, or APIs that will talk to a third-party system. The prompt can now use your data with your instruction set.”
The full process looks like this:
- Create a “knowledge base” in Claude’s project area with previous articles, notes, and draft content.
- Write a system prompt with instructions on voice, tone, and output format, plus words to exclude to avoid sounding generated.
- Include a specific request (e.g., “write an outline of an 800-word article”).
- Let the AI generate the column based on the knowledge base and prompts.
- Review the output, ask for clarifications, and request edits as needed.
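The steps above map naturally onto a single request to a chat model. Here is a rough sketch of how the knowledge base, system prompt, and specific request could be assembled into one payload; the system prompt, exclusion words, and model name are illustrative placeholders, not Tasaka’s actual instructions:

```python
# Placeholder system prompt: voice/tone instructions plus an exclusion list
# (the specific words here are examples, not Tasaka's actual list).
SYSTEM_PROMPT = (
    "You are a writing co-pilot for a media columnist. "
    "Match the voice and tone of the attached articles. "
    "Avoid words that make text sound generated, such as: "
    "delve, landscape, leverage, robust."
)

def build_request(knowledge_base: dict[str, str], request: str) -> dict:
    """Steps 1-3: combine the knowledge base, system prompt, and a
    specific request into a single chat-completion payload."""
    context = "\n\n".join(
        f"<document name='{name}'>\n{text}\n</document>"
        for name, text in knowledge_base.items()
    )
    return {
        "model": "claude-model-name",  # placeholder, not a real model ID
        "max_tokens": 2000,
        "system": SYSTEM_PROMPT,
        "messages": [
            {"role": "user", "content": f"{context}\n\n{request}"}
        ],
    }

# Steps 4-5 would send this payload to the model's API and then
# review the draft and request edits in follow-up messages.
```

In Claude’s project area this assembly happens behind the scenes: uploaded files play the role of `knowledge_base`, and the project instructions play the role of `SYSTEM_PROMPT`.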
This is what Claude’s project area looks like, with Tasaka’s data section on the right:
Tasaka noted, “I uploaded 10 articles… Anything I write within that Projects folder will look at the writing in my streams above, as well as what’s in my knowledge base.”
Keith Pepper, Publisher, Roughdraft: What are your thoughts around disclosures about AI for publishers, and what do you think the right balance is?
Tasaka advised erring on the side of transparency, and suggested considering disclaimers similar to those used for affiliate marketing. Some publications use “TLDR bullets” at the top of articles stating “generated by AI,” but publishers should always be clear about the extent of AI involvement and human oversight.
“It never hurts to be too transparent,” Tasaka said. “I think if people know that this is AI, or to the extent that it’s completely AI and no human looked at it, that’s another thing.”
Callie Carin, Publisher, Ins-compliance: I recently took over a newsletter business, so learning to use AI to save time is going to be important. When I look at the list of exclusions, wouldn’t that language already be excluded when you input the data?
Tasaka responded that the AI still uses those words even with style guides in place.
“These are the words that trip up the bots looking for AI-generated content.” The AI does not follow all the guidelines on its own; some still need to be included in the prompts. For example, “Sometimes if you put brand names, it will exclude them, so if I am writing about a specific company, I ask it not to exclude the name.”
It will also often misspell brand names.
“If you Google you can find a list of avoidance words,” he advised.
Jason Scott, Publisher, JCS Marketing: How do I help the editorial team feel more comfortable using AI?
One of the challenges is that AI is being used within the organization “willy-nilly.”
Employees are using it anyway but may not want to disclose it.
Tasaka recommended using a central system, such as TeamAI, to encourage sharing and transparency so that all of the experiments are in one place.
He also advised encouraging employees to see AI as an opportunity to improve their skill level.
“GPT is only 19 months old,” he said. “It’s not AI that’s going to take people’s jobs. It’s people using AI.”
“If your employees can get ahead of the curve, if they can be a master of AI within their domain, it opens up these whole new layers of opportunity going forward for them,” Tasaka explained.
He recommends using this as the pitch to the editorial team.
“We’re going to invest in you. We’re going to teach you the tools. We will benefit, but you will ultimately benefit in the long run.”
Conclusions
Tasaka encouraged publishers to explore more advanced AI applications, such as AI agents and autonomous systems that can perform multiple tasks, including data collection, analysis, and content creation. He framed the industry in eras: Media 1.0 ran until about 2005, Media 2.0 from about 2005 to 2020, and Media 3.0 from 2020 until now.
In Media 1.0, “we won the economics of hard. We had printing presses. We had distribution. We had the infrastructure.”
Today, however, media companies “don’t have that much of an advantage over two people and a WordPress site. Now we have to figure out how to win in the economics of easy.”
The AI world, he said, is based on “speed as a competitive advantage. How well you know it. How quickly you can adopt it.”
“The tools constantly change. Keep experimenting. Keep pushing the edges.”