Could a learning app based on the Feynman Technique work? Need some feedback

I’ve been exploring the Feynman Technique lately and wondering if an app based on it could solve some educational challenges. The Feynman Technique is a 4-step process:

  1. Pick a concept to learn.

  2. Teach it to yourself or someone else.

  3. Go back to the source material if you get stuck.

  4. Simplify your explanation and use analogies.

My idea is an app that pairs two learners: one explains a concept while the other asks questions. This would help the explainer identify gaps in their knowledge.
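To make the flow concrete, here’s a rough sketch of how a session might be modeled. Everything here (class names, fields, the `flag_gap` helper) is hypothetical, just to illustrate the pairing idea, not a spec:

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    speaker: str  # "explainer" or "questioner"
    text: str

@dataclass
class Session:
    """One Feynman-style session: one learner explains, the other probes."""
    concept: str
    explainer: str   # user id of the person explaining
    questioner: str  # user id of the person asking questions
    turns: list[Turn] = field(default_factory=list)
    gaps: list[str] = field(default_factory=list)  # points the explainer couldn't answer

    def record(self, speaker: str, text: str) -> None:
        self.turns.append(Turn(speaker, text))

    def flag_gap(self, note: str) -> None:
        """Called when the explainer gets stuck (step 3: go back to the source)."""
        self.gaps.append(note)

# Example: pair two learners on a concept
session = Session(concept="Bayes' theorem", explainer="user_a", questioner="user_b")
session.record("explainer", "Bayes' theorem updates a probability when new evidence arrives...")
session.record("questioner", "What does the prior mean here?")
session.flag_gap("Couldn't explain the prior without jargon")
```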

Would love to hear your thoughts on whether this idea has potential!

You’re underestimating how hard it is to ask good questions. If neither learner fully understands the topic, there’s no built-in mechanism (like problem sets) to surface what neither of them knows. That could make it tough to pinpoint gaps.

@Alex1
I remember a quote about this kind of learning—it’s like ‘floundering in mutually supportive ignorance’ if there’s no guidance or feedback. It can be an excellent teaching technique, but only with proper oversight.

@Andrian
Agreed. Teachers won’t be replaced, but personalized learning is within reach thanks to tools like language models. The better educators adapt to these tools, the more deep, personalized teaching they can offer without overextending themselves.

@Alex1
Right. Maybe you should name the app something like Blind Leading the Blind or ‘Blindr’ to make it appealing to tech investors? As Paulo Freire once said, ‘you can’t think the thought if you don’t have the words.’ Plus, how would you manage trolls? What’s to stop bad actors from misleading younger users or spreading misinformation?

@Alex1
Textbooks help avoid these issues by providing structured learning. Now, with language models, students can ask questions about any textbook material and get personalized answers, making ‘first principles’ learning much easier.

Why not try prompting ChatGPT in a similar way and see how it works?

keny said:
Why not try prompting ChatGPT in a similar way and see how it works?

And then record a video to show us how it went! :smile:

Sal Khan from Khan Academy already did something similar with schoolhouse.world.

I loved the idea of using ChatGPT for this, so I created a custom GPT to experiment with the concept. Here’s the prompt:

“Feynman” acts as a tutor using the Feynman Technique. It explains concepts in simple language, avoiding jargon. Then, it guides users by prompting them to explain the concepts themselves and asking questions to identify gaps. It’s designed to be friendly and supportive, making the learning process more personalized.

Check it out: https://chat.openai.com/g/g-en4Z5gz7g-feyman
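For anyone who wants to experiment outside the GPT builder, here’s a rough equivalent using the openai Python SDK. The model name and the prompt wording below are placeholders of my own, not the exact configuration of the custom GPT:

```python
# Sketch of the Feynman-tutor idea via the API.
# Assumes the openai Python SDK (>= 1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

FEYNMAN_SYSTEM_PROMPT = (
    "You are 'Feynman', a tutor using the Feynman Technique. "
    "Explain concepts in simple language and avoid jargon. "
    "Then ask the user to explain the concept back in their own words, "
    "and ask follow-up questions that expose gaps in their understanding. "
    "Be friendly and supportive."
)

def feynman_reply(history: list[dict]) -> str:
    """Send the conversation so far and return the tutor's next message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": FEYNMAN_SYSTEM_PROMPT}] + history,
    )
    return response.choices[0].message.content

# Example turn
history = [{"role": "user", "content": "Can you help me learn what entropy is?"}]
print(feynman_reply(history))
```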

I’m still refining it but would love any feedback!

@Logan
Thanks for sharing! I’ll try it out soon. How’s it working for you so far?

keny said:
@Logan
Thanks for sharing! I’ll try it out soon. How’s it working for you so far?

It’s been okay, but I’m having trouble balancing how much material the model introduces versus when it switches to the Feynman technique for reinforcement. It might work better for topics users have already studied, so the app can focus on reinforcement and gap-filling.
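One way to experiment with that balance (just a sketch, not how the linked GPT is currently configured) is to make the mode explicit, so the model only introduces material when told to and otherwise stays in reinforcement mode:

```python
# Hypothetical "introduce" vs. "reinforce" split for the tutor's system prompt.
INTRODUCE = (
    "The user has not studied this topic yet. Briefly introduce the concept "
    "in plain language before asking them to explain it back."
)
REINFORCE = (
    "The user has already studied this topic. Do not lecture; immediately ask "
    "them to explain it, then probe for gaps and suggest what to review."
)

def build_system_prompt(already_studied: bool) -> str:
    base = (
        "You are 'Feynman', a tutor using the Feynman Technique. "
        "Keep language simple and jargon-free, and be supportive."
    )
    return base + " " + (REINFORCE if already_studied else INTRODUCE)

# Reinforcement-only mode, for topics the user has already studied:
print(build_system_prompt(already_studied=True))
```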


Good idea, but why not pair learners with someone who knows more about the concept? If you explain something to someone more knowledgeable, they can correct you and help fill the gaps. A community model could work, where people help each other based on their strengths in different subjects. It’s a tough concept to execute, but worth exploring!

@CatherineRivers
The idea behind explaining to someone with little knowledge is that they’ll ask questions, helping the learner discover gaps in their understanding. Simplifying concepts for someone else reveals what you don’t fully grasp.

@lindah
I see what you mean. But if someone knows very little, they might not catch mistakes, especially if the explanation sounds convincing but is inaccurate. Still, I understand the idea.