- What I tried: Using AI to help a friend with an astronomy homework lab that turned out to be mostly algebra.
- What I learned: AI is great at checking work, but it struggles when physical measurement and context matter.
- How you can apply it: Let AI verify your thinking, not replace it.
A friend of mine is taking an astronomy class and reached out asking for help with their homework. My first thought was, How hard can astronomy homework be?
Turns out, pretty hard. Or at least unfamiliar.
The lab focused on parallax movement of distant stars and how you calculate distance using arcseconds and parsecs. This is important in real life in case you ever need to make the Kessel Run in 12 parsecs. Lots of formulas. Lots of steps. Very little “staring at the stars,” and a whole lot of algebra.
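For context, the core relationship is refreshingly simple: a star's distance in parsecs is the reciprocal of its parallax angle in arcseconds. Here's a minimal sketch of that formula (the example parallax value is a well-known one for Proxima Centauri, not a number from the lab):

```python
def parallax_to_parsecs(parallax_arcsec: float) -> float:
    """Convert a parallax angle in arcseconds to a distance in parsecs.

    Distance (parsecs) = 1 / parallax (arcseconds).
    """
    if parallax_arcsec <= 0:
        raise ValueError("Parallax must be a positive angle")
    return 1.0 / parallax_arcsec


# Proxima Centauri's parallax is about 0.768 arcseconds,
# which works out to roughly 1.3 parsecs away.
print(parallax_to_parsecs(0.768))
```

The math itself is one division; the hard part of the lab was getting a trustworthy parallax measurement in the first place.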
So, given my affinity for AI, I dropped the entire 12-page lab into ChatGPT and started feeding it questions as my friend asked them. It gave answers. Confident ones. Detailed ones.
But something felt off. The explanations weren’t actually helping my friend understand the problem, and I couldn’t quite tell why.
So I changed my approach. I decided to do the homework myself.
That’s when the real lesson showed up.
There was a block of questions where everything depended on the very first answer. ChatGPT got that first answer wrong. Not because it’s “bad at math,” but because the problem required a physical measurement on paper using a ruler. AI can’t do that. It made an assumption, got the measurement wrong, and every answer built on it came out wrong too.
Doing the work myself made the difference. Once I understood the problem, I could explain it clearly. And here’s where AI became genuinely useful.
Instead of asking AI to solve the problem, I asked it to check my work.
I’d calculate an answer, paste it in, and ask, “Is this correct?” It would walk through the steps and explain why. That’s how I caught the original mistake. I saw its reasoning, realized the measurement assumption was wrong, and corrected it. From there on, each answer lined up.
It’s like using a calculator. You still have to look at the result and ask, Does that number make sense? If you multiply 20 by 30 and get 0.6, your instincts should kick in.
Same with AI.
I always say: lead with your heart, finish with your brain. In this case, finishing with my brain mattered a lot. AI helped me learn astronomy math better, but only because I stayed engaged and checked the work.
This matters because AI works best as a second set of eyes, not a substitute for understanding. When you use it to verify, explain, and sanity-check your thinking, you learn more and trust the results you put out into the world.