Several weeks ago, I spent about a week fully reverse engineering a Stereomaker pedal. It accepts a mono signal and produces a stereo field using a 5-stage all-pass filter to mess with the phase without the use of delay (which sounds cheesy and creates a result that doesn't mix well back to mono).
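For the curious, here's roughly the phase trick in a few lines of Python. Consider this a minimal digital sketch of the technique, not the pedal's actual analog circuit; the corner frequencies are invented for illustration:

```python
import numpy as np

def allpass1(x, fc, fs):
    """First-order all-pass: flat magnitude at every frequency, but the
    phase sweeps from 0 (DC) to -180 degrees (Nyquist), centered on fc."""
    t = np.tan(np.pi * fc / fs)
    a = (t - 1.0) / (t + 1.0)
    y = np.zeros_like(x)
    x1 = y1 = 0.0                      # one sample of filter state
    for n, xn in enumerate(x):
        y[n] = a * xn + x1 - a * y1    # y[n] = a*x[n] + x[n-1] - a*y[n-1]
        x1, y1 = xn, y[n]
    return y

def pseudo_stereo(mono, fs=48000, fcs=(200, 600, 1800, 5400, 16000)):
    """Blend the dry signal with a 5-stage all-pass cascade: sum on the
    left, difference on the right. The frequency-dependent phase offset
    between channels creates the width, while left + right collapses
    exactly back to the original mono signal."""
    shifted = mono.astype(np.float64)
    for fc in fcs:                     # hypothetical corner frequencies
        shifted = allpass1(shifted, fc, fs)
    dry = mono.astype(np.float64)
    return 0.5 * (dry + shifted), 0.5 * (dry - shifted)
```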
I've not really worked with audio circuits before, and I'd been too intimidated to approach the domain. My journey was radically expedited by iterating through the entire process with a ChatGPT instance. I would share zoomed photos, grill it about how audio transformers work, and get it to patiently explain JFET soft-switching using an inverter until the pattern was finally forced into my goopy brain.
Through the process of exploring every node of this circuit, I learned about configurable ground lifts; using a diode bridge to extract the desired voltage-rail polarity; how to safely handle both TS and TRS cables with a transformer; that a transformer's two outputs are 180 degrees out of phase; and how to add a switch that attenuates the signal by 10 dB to toggle between line and instrument levels.
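The 10 dB pad, for instance, turned out to be plain voltage-divider math. A quick sanity check, assuming an idealized resistive divider (a real line/instrument switch also has to account for source and load impedance; these values are just illustrative):

```python
import math

# -10 dB is a voltage ratio of 10 ** (-10 / 20), about 0.316.
# A series R1 feeding a shunt R2 scales the voltage by R2 / (R1 + R2).
R1, R2 = 22_000, 10_000                    # nearby standard E24 values
ratio = R2 / (R1 + R2)                     # 0.3125
print(f"{20 * math.log10(ratio):.1f} dB")  # -10.1 dB, close enough for a pad
```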
Eventually I transitioned from sharing PCB photos to implementing my own take on the cascade design in KiCad, at which point I was copying and pasting chunks of netlist and reasoning about capacitor values with it.
In short, I gave myself a self-directed, college-level intensive in about a week. Since that's not generally a thing IRL, it's reasonable to conclude that I would never have moved this from "some day" to "done, and deeply understood" without the ability to shamelessly interrogate an LLM at all hours of the day or night, on my schedule.
If you're lazy, perhaps you're just... lazy?
Anyhow, I highly recommend the Surfy Industries Stereomaker. It's amazing at what it does. https://www.surfyindustries.com/stereomaker
LLMs absolutely let you explore ideas and areas you wouldn't have otherwise...but does your new design actually _work_?
I'm curious whether the "knowledge" you gained was real or hallucinatory. I've been using LLMs this way myself, but I worry I'm contaminating my memory with false information.
This is a phenomenal example of exactly what I am advocating.
Notice you didn't ask the AI to 'just design a stereo pedal for me.' You interrogated it, reasoned about netlists, and forced the concepts into your brain through intense friction. That is pure deep work.
I have a nearly total opposite take. I can't tell you how many times I've read a book, a paper or something else and been confused by some ambiguity in the author's prose. Being able to drop the paper (or even the book!) into an LLM to dig into the precise meaning has been an unbelievable boost for me.
Now I can actually get beyond conceptual misunderstanding, or even outright ignorance, and get to practice (which is how skills actually develop) in a much more streamlined way.
The key is to use the tool with discipline, going into it with a few inviolable rules. I have a couple on my list now: embrace Popperian falsifiability, and embrace Bertrand Russell's observation that “Everything is vague to a degree you do not realize till you have tried to make it precise.”
LLMs have become excellent teachers for me as a result.
If you're not sure what something is saying, how can you be sure that the AI has picked the correct interpretation?
Right question to ask. However, good readers and professionals do have some sense for this, and the ability to dig further as needed. On the other hand, books and articles are often over-detailed, with the key stuff buried deep or even remaining tacit.
For me, LLMs have often pointed me to answers or given food for thought that even subject matter experts could not. I do not take those answers at face value, but the net result is still better than the search remaining open-ended.
We actually don't disagree at all—you are perfectly illustrating my point.
Applying strict epistemic discipline (Popper, Russell) to resolve ambiguity and accelerate actual practice is the very definition of deep work. You aren't using AI as a shortcut to skip thinking; you're using it as a Socratic sparring partner to deepen it. This is exactly the paradigm shift I'm advocating for.
I'm strongly reminded of early Google every time I use AI for research. I used to know little about a topic, try to search on it, and get shit results. But Google would give me pages of results, so I could skim a lot, and eventually, on page 10, I'd stumble across some term of art, and that term would greatly improve my search. Rinse and repeat, and I'd have a good sense of the topic I was interested in.
You can't really do that with Google anymore, and I can't remember the last time I bothered to actually learn something non-trivial from Google. ChatGPT, however, has been a game changer. I can ask a really dumb question and get some basic info about the thing I'm asking about, and while it's often not quite what I'm looking for, it gives me clues to follow, and I can quickly zero in on what I'm looking for, often in new contexts.
As an autodidact whose main motivation to go to college was to get access to the stacks and direct internet access, I can't even begin to tell you how game-changing LLMs seem to be for learning.
To your point, though, my concern is that we don't know how to teach people how to learn, and LLMs will likely seduce many into bad behavior and poor research hygiene. I treat my research the same way I attack the stacks, but take someone who's never been to a research library and ask them to create a report on some topic, and the basic resistance is: why? Why do by hand what an LLM is almost literally built to do? Yet that is also highly related to individual learning: taking a bunch of disparate sources and synthesizing output related to the input.
I suspect we'll learn how to use LLMs the same way we learned how to use calculators. But I have no doubt that on average (or maybe at the median, or the mode?) calculators have made us less capable of doing basic arithmetic, and I suspect LLMs will likewise make a great percentage of the population worse at synthesizing information. I'd hope it's only the same people who would otherwise have gotten their information solely from TV, but I do have a slight fear it will creep past that subsection of the population.
Thank you for this helpful differentiation. I agree, and if it undermines our trust in 'effort' (we start to be suspicious about how much some piece of work is really 'worth'), it also undermines our relationships.
A good example is 'birthday wishes':
https://m.youtube.com/watch?v=2IYqhdJuRfU&t=5m47s
(AutoCorrect, AutoComplete - generate? AutoCongratulate? How much is 'okay'?)
Maybe for you, reading a paper deeply is the most constructive way to absorb information.
For me, it is having a document and interrogating it. Maybe having many sets of documents about a whole category of information. Getting the bullet points, getting the high level, then interrogating and digging down, and being able to have information bubbled up as I need it.
That is the learning style that matches how I learn.
I have never been able to skim, so reading a large document WILL teach me that topic, but getting through that doc is tough.
I can dump a very large set of docs in a reader that lets me interrogate the whole data set and I can fly through looking for what is interesting to me, and what I may need, and along the way I will likely dive into other parts too. Asking questions keeps my hyperfocus active.
I think it is just a different style. I have synesthesia and a hard time not working on three to five things at once. I am used to knowing I learn differently than others.
I'm convinced that at some point, looking productive and being productive become the same thing.
The issue, in my experience, is that there is a lot of productive work that does not look productive at first glance. Long term work may not look productive for years until it suddenly is tremendously productive. And there is a lot of quiet and often thankless maintenance work that goes on largely unnoticed that helps others do their jobs well. Both have value despite superficially looking unproductive at times. I'd argue that both look productive at long time scales but unproductive at short time scales.
There's a point where they meet, but "faking it until you make it" doesn't work for productivity in the same way it doesn't work for getting rich.
But there's a secret: just buy my $399 masterclass and I'll teach you 17 simple productivity hacks to 100x your income.
So what is important is not that 10 or 20 times the work can be done, but that you are stressed out and exhausted while doing your work?
Getting your directions from Google Maps might make you seem more knowledgeable about a city's geography than you actually are.
However, what does it mean to say that's deceptive? It means you care more about social signalling than you do about arriving at the right destination on time. Showing that you're not the sort of person who gets lost isn't really the primary reason people use Google Maps. When it's not a test of your navigation skills, it's not cheating.
Similarly, doing Google searches before posting might be "deceptive" in that it makes you seem more knowledgeable than you are, but on the whole I would prefer more knowledgeable posts, so the social signalling seems like a secondary consideration.
Similarly for using AI. Sometimes it's just a way to get more information.
Does this post feel AI-generated to anyone else?
But to actually answer the question: I've been putting research-paper PDFs into NotebookLM and turning them into ~40-minute podcasts, which I listen to on my walks. Yes, it's shallow learning, and there may be some hallucinations in there, but I wouldn't have read some of those papers otherwise.
I think the risk is this: when non-technical users who've never shipped software in their lives can dictate to a machine and get "instant results", it's going to bring back managers who don't understand that you don't just ship code. Especially these days, when one bad dependency can mean downtime or worse.
This was the issue with some of the ads Apple ran when launching the iPhone 16. They showed the worst worker using Apple Intelligence to impress the boss and earn promotions while being generally lazy and terrible. I felt it was the wrong message to send. [0]
I don’t think AI is all bad for summaries though. I used to add stuff to a reading list with good intentions, but things went there to die. Hundreds of articles added, but with so much new content each day, I would never actually read any of it. Now, I use AI summaries to get more context on what the article is. If it sounds interesting and I want more info, I can read the whole thing in the moment. If I’m satisfied with the summary alone, I can move on with my life. No more pushing it off to a reading list that only generates guilt. I actually end up reading more articles due to this, not less.
[0] https://youtu.be/YP-ukrBVDH8 (this is sadly the best copy I can find)
Agreed. LLMs have helped me achieve much deeper reading, _when directed to do so_. Asking an LLM to “Teach me Socratically about this paper/code. One question at a time” usually allows me to get a much deeper reading of the material than I would otherwise.
That's a different take from the way I've found AI to be genuinely useful. I try not to use it for deep work; in fact, I try to use it minimally but frequently, for short checks on my own understanding.
Using your research-paper reading example: I would read the paper, but then ask an AI tool specific questions about the work, frequently in new chats. Then at the end I might ask it to implement my description of the paper. I guess it's your 'debate with me' conclusion; the only difference is I would try to have multiple short conversations.
Ironically, what you described is exactly using AI to help with deep work. You do the heavy lifting (reading), and use AI strictly for stateless verification and testing your mental model. That is the ideal synergy.
I have some algorithms I absolutely must know. So I’m hand coding them and asking the agent to critique me.
I do a very similar thing in writing - I need feedback, don’t rewrite this!
In both cases I need the struggle of editing / failing to arrive at a deeper understanding.
The future dev will need to know when to hand code vs when to not waste your time. And the advantage will still go to the person willing to experience struggle to understand what they need to.
I find value in learning some things deeply but not all things.
The ability to be more selective about where I attend deeply, while leveraging fast shallow learning to complete other tasks... That seems like a potential benefit and a nice choice to have in the toolbox.
The trick is maintaining enough domain expertise that we can actually audit those shallow outputs. If the baseline knowledge drops too low, we cannot tell when the AI is being lazy or wrong.
We need to allocate some percentage of our AI use to tackling this problem: helping us learn and find better abstractions and methods.
Weird post given it looks like an LLM wrote it.
I'm sick of hearing about AI, but I'm significantly more sick of anyone who knows how to write English prose at a level higher than "typical rural American" being accused of using AI to write.
Well that's the world we live in now.
It doesn't have to be. Comments such as yours add nothing to the conversation. It's an ad hominem attack. In the absence of an explanation of why you believe it "looks like AI", it's a baseless accusation.
It has the typical patterns: em dashes, "it's not A, it's B". It's also a relatively new, low-karma account, and its other comments are similarly LLM-ish.
I don't think it's all that bad. There's definitely vibe coding that is "copy paste / throw away" programming on ultra steroids. But after vibe coding two products and then finding it essentially impossible to get them to a quality bar I considered ready to launch, I've been working on a more measured approach that leverages AI in a way that simply speeds up traditional programming. I use it to save tons of time on "why is Pylance mad about X", "X works from the docs example but my slightly modified X gives error Y", "how do I make a toggle switch in CSS and HTML", "how am I supposed to do Python context managers in 2026 (I didn't know about the generator wrapper thing)": all the bullshit that constantly slows you down but needs to be right. AI is great at helping you kickstart and then keeping you unblocked.
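(For anyone else who didn't know about it: the "generator wrapper thing" is contextlib.contextmanager, which turns a generator function into a context manager. A toy sketch, with a made-up helper name:)

```python
from contextlib import contextmanager

@contextmanager
def managed_file(path):     # hypothetical helper, for illustration
    f = open(path)          # setup runs when the with-block is entered
    try:
        yield f             # the value bound by "with managed_file(p) as f"
    finally:
        f.close()           # teardown runs even if the body raises

with managed_file("notes.txt") as f:
    print(f.read())
```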
I've been using Gemini chat for this, and specifically only giving it my code via copy-paste. This sounds Luddite, but it's actually been pretty interesting. I can show it my couple of "core" library files and then ask it to do the next thing. I can inspect the output and retool it to my satisfaction, then slot it into my program, or use it as an example to hand-code from.
This very intentional "me being the bridge" between AI and the code has helped so much in getting speed out of AI but then not letting it go insane and write a ton of slop.
And not to toot my own horn too much, but I think AI accelerates people more the wider their expertise is, even if it's not incredibly deep. E.g., I know enough CSS to spot slop, correct mistakes, and verify the output, but I HATE writing CSS. So the AI and I pair really well there, and my UIs look way better than they ever have.
Pure 'vibe coding' is essentially technical 'tittytainment'. Using AI for the horizontal spread while you enforce vertical architectural depth is true deep work.
What's important? That bridges get built and stay up, or that they're built only after X hours of toil? AI will change the nature of work, and it's going to make a lot of people uncomfortable. But more importantly, it's going to let people who understand things faster get the info they need to be productive.
AI does not currently build bridges that stay up
I have a feeling we would all be terrified if we knew how much AI had a role in building bridges at the moment.
TBD if they stay up, I suppose.
The stories I hear from various white collar professions not related to tech are... interesting, to say the least. There is a whole lot of unsanctioned shadow IT going on regardless of policy.
Not really; there's a lot it does right. But any automated tool or calculator will only be as good as its operator.