Years ago, I read Mortimer J. Adler’s book “How to Read a Book” and continue to be fascinated by the variety of ways one can approach a book. Unsurprisingly, it gave me another reason to be skeptical of the idea of preferring summaries over books.
The relevant question seems to be what summaries allow one to do, or what they are for. And since there is rarely only one way to use something: what is it that you would want summaries for?
Adler suggests that there are three stages of reading a book: a structural, an interpretive, and a critical stage. Taking this into consideration, a summary of a book will depend on the approach a reader has taken to that book, but also on the number of times they have read it.
This isn’t only about in-depth knowledge of what is in the book; it is also a question of what type of understanding one seeks of the matters the book describes. And it is a question of how far a reader dares to engage with what the book is about and what it could change. People, for example, tend to prefer books that confirm their point of view. Connecting with another point of view, they may sense a risk of losing their existing ideas. Maybe not consciously, but possibly through habitual avoidance.
ChatGPT raises a similar question: what is ChatGPT’s reading stage? Can it deliver an interpretive or critical reading? Can it transform the data it uses into an explanation that teaches a five-year-old? And what does it do when that information has not yet been written?
Just like summaries, ChatGPT provides data. And just as with summaries, there needs to be a reason why one uses ChatGPT’s output.
And it may be important to remind oneself that there are also ethical considerations that accompany using a summary instead of a book, or ChatGPT instead of one’s own knowledge, experience, or expertise.