[Image: a robot counting on an abacus while sitting in a warehouse filled with jars of colored marbles]

Building Better, Faster Spreadsheets With AI

I was inspired by analytics guru Adam Braff to accelerate my spreadsheet work with AI. So, I built a couple of tests using Google Sheets, Gemini, and ChatGPT. It turns out that AI can do some amazing analysis and graphing, and it can also make some profound errors…

In short, you should use AI to improve your spreadsheet productivity, but you need to write detailed prompts and carefully check the output. It’s work, but it can be high-ROI work.

Test #1: Basic Charting

For the first test case, I gave each AI a Google Sheet with columns for “cumulative web visits” and “number of people.” The goal: make a Pareto chart.

Gemini / Chrome made short work of delivering a bar graph and pivot table. As extra credit, Gemini plotted a line graph with the “cumulative percentage of people.” However, the X-axis was labelled “1, 2, 3, 4, 5, 0, 6.” A single prompt put “0” back in its rightful place at the head of the line and removed the unsolicited line graph.
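For readers who want to sanity-check an AI-generated Pareto chart, the underlying computation is simple enough to script. Here is a minimal Python sketch with made-up counts (the visit buckets and people counts below are illustrative, not my actual data):

```python
# Hypothetical version of the test sheet's two columns:
# "cumulative web visits" buckets and the "number of people" in each.
visits = [0, 1, 2, 3, 4, 5, 6]
people = [120, 80, 40, 25, 15, 12, 8]

total = sum(people)
cumulative = []
running = 0
for count in people:
    running += count
    cumulative.append(round(100 * running / total, 1))

# The Pareto line plots these cumulative percentages per bucket.
print(cumulative)  # -> [40.0, 66.7, 80.0, 88.3, 93.3, 97.3, 100.0]
```

The cumulative percentages are what the line graph should plot; they must rise monotonically and end at exactly 100%, which is a quick visual check on any chart the AI draws.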

ChatGPT / Atlas, on the other hand, really struggled. It spent a long time exploring Google Sheets and teaching itself how to make graphs. Unlike Gemini, ChatGPT did correctly label the bar graph’s X-axis “0, 1, 2…” Like Gemini, ChatGPT also inserted a cumulative percentage line graph, but gave up after calculating the first data point.

Gemini was clearly the better choice for this simple task.

Test #2: Complex Data

For the second test, I fed Gemini my time-tracking spreadsheet: 689 rows x 25 columns, with a complex structure that, in my opinion, looks regular and understandable to a human.

Gemini failed miserably. First, it told me that its context window, the amount of data it could remember, was only 194 rows deep x 24 columns wide. Despite repeated attempts, I could not get it to expand the context window. I then tried a shorter data set that fit within the 194 x 24 window. This time, Gemini could not figure out the data structure, despite detailed instructions.

Test #3: Advanced Visualization

For the final test, I asked Gemini to build a synthetic data set with Column A as the “date,” Column B as the “number of people,” and Column C as the “hours of video” watched by those people on that date. Gemini did a great job populating Column A with a year’s worth of sequential dates and Columns B / C with random numbers in a specified range.
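Generating a data set like this is also only a few lines of plain Python. A sketch, with assumed value ranges since I haven’t reproduced the exact bounds from my prompt:

```python
import random
from datetime import date, timedelta

random.seed(42)  # fixed seed so the synthetic sheet is reproducible

start = date(2023, 1, 1)
rows = []
for i in range(365):
    day = start + timedelta(days=i)
    people = random.randint(100, 1000)  # assumed range for Column B
    hours = random.randint(50, 5000)    # assumed range for Column C
    rows.append((day.isoformat(), people, hours))
```

Each tuple in `rows` is one spreadsheet row: a sequential date string plus the two random columns.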

When I asked for a scatter plot, Gemini told me “I am still learning and currently unable to directly insert charts or plots into the spreadsheet for you at this time,” followed by useful tips on how to do it manually.

I decided to try something simpler and asked Gemini for the mean, median, and mode of each column. Instead of delivering the values, Gemini gave me formulas to manually type in. So, I just asked the question again. This time it actually performed the calculations correctly.
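All three statistics are one-liners in Python’s standard library, which makes a handy cross-check on whatever values the AI reports (the sample column below is hypothetical):

```python
import statistics

column = [3, 5, 3, 8, 3, 5, 9]  # hypothetical sample of one column

print(statistics.mean(column))    # arithmetic average
print(statistics.median(column))  # middle value of the sorted column -> 5
print(statistics.mode(column))    # most frequent value -> 3
```

Recomputing these independently is exactly the kind of verification step the AI’s inconsistency makes necessary.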

Now that I knew that asking the same question twice could yield different results, I again asked Gemini to draw that scatter plot. This time, it worked!

To make things interesting, I asked Gemini to take on a task that is typically quite time-consuming in Excel: coloring the dots in different quadrants different colors. I asked for dots with values greater than the medians to be colored yellow. It painted them red. I asked for dots below the medians to be green; it colored them blue. I finally got Gemini to color the first-quadrant dots yellow, but in the process, it colored all the other dots red.
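The quadrant-coloring rule itself is easy to state in code, which is a useful way to check the AI’s dots. A sketch using the column medians as the quadrant boundaries (the points are hypothetical, and dots sitting exactly on a median are left gray here):

```python
import statistics

# Hypothetical (people, hours) points from the synthetic sheet.
points = [(120, 300), (400, 900), (250, 100), (500, 1200), (90, 700)]

med_x = statistics.median(p[0] for p in points)  # median people
med_y = statistics.median(p[1] for p in points)  # median hours

def quadrant_color(x, y):
    # Above both medians: yellow (the first-quadrant request);
    # below both medians: green; on a median or mixed: gray.
    if x > med_x and y > med_y:
        return "yellow"
    if x < med_x and y < med_y:
        return "green"
    return "gray"

colors = [quadrant_color(x, y) for x, y in points]
```

In a spreadsheet this is the same two-median comparison, just applied as a conditional format instead of a function.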

Then things started to get weird. The graph’s key was consistently wrong, e.g. it said “yellow” but the dot next to it was red. I asked for commas to be inserted in the axis labels (e.g. 1000000 -> 1,000,000). Gemini confidently told me that it had inserted the commas, when all it actually did was re-color the dots again. I asked it to change the axis labels to scientific notation (e.g. 1,000,000 -> 1E6). Again, Gemini confidently asserted that it had, even though it hadn’t.
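For perspective, neither label format I asked for is exotic; both are single format specifiers in Python:

```python
value = 1_000_000

print(f"{value:,}")    # comma grouping -> 1,000,000
print(f"{value:.0E}")  # scientific notation -> 1E+06 (Python's spelling of 1E6)
```

A chart library applies the same kind of formatter to every axis tick, which is why the request should have been a one-step change.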

The Value of Advanced Reasoning Models

Thinking that I had found the outer bounds of the AI’s abilities, I stopped my tests. But after reading an article by Ethan Mollick about which AI models to use, I went back to ChatGPT / Atlas and selected “5.2-Thinking” instead of the default “5.2.” Basically, I switched from a fast default model to a deliberative reasoning model. 

What an amazing difference that made! ChatGPT 5.2-Thinking did the scatter plot right, the first time, quickly. Each quadrant had the correctly colored dots and the axis labels were right.

So, I went back to Gemini to see if I could select a better model, e.g. Gemini 3 Pro or Thinking. But Gemini told me that the model it uses with Google Sheets is determined automatically by my subscription level, so I had no choice of model.

For fun, I gave Gemini my successful ChatGPT 5.2-Thinking prompt and asked it to try one last time. Gemini promptly did the plot, labelling every yellow dot as green… Perhaps someday I will pay for the more advanced Gemini models!

Key Learnings

To me, it feels like “AI for spreadsheets” is where “AI for text” was a year or two ago. Namely:

  • Use AI on spreadsheets if you have narrow, well-defined tasks.
  • Subscribe to your favorite AI platform to get access to the better reasoning models.
  • Always verify what the AI tells you.

Spreadsheet analysis with AI is still work, but it’s worth it!

AI Content Statement

All text in this post is human-generated, written by me. The image was generated by ChatGPT-5.2 via my Custom GPT “Robot Image Generator.”