> The more surprising part is the unusual reactions of the other people getting a better picture and context of what I’m explaining without the usual back and forth - which has landed me my fair share of complaints of having to hear mini lectures, but not more than people appreciative of the fuller picture.
It’s not surprising to me at all. People don’t tend to appreciate being lectured at - especially in a conversational context. Moreover, people really don’t like being spoken to as if they’re robots (which is something I’ve started to notice happening more and more in my professional life).
The fact that the author considers these reactions surprising and “unusual” betrays a misunderstanding of (some of) the purposes of communication. Notably, the more “human” purposes.
> The fact that the author considers these reactions surprising and “unusual” betrays a misunderstanding of (some of) the purposes of communication. Notably, the more “human” purposes.
Guess that's what early access to the internet and a pandemic during the final school years does to a person, ah well haha
They "gave an exam" as in they administered it. They didn't graduate recently, they teach.
From the article:
> I was midway through explaining a concept he hadn’t covered when he stopped me. He pointed out that my way of speaking had completely changed and how it was unusually structured and didn’t give him the opportunity to ask follow up questions.
IMHO this sounds like a bit of an exaggeration in service of a specific narrative for the blog post, but language convergence has been a topic of conversation ever since the earliest autocomplete features appeared on smartphones.
The feedback loop in this situation is (LLM trains on people <-> People then train on LLMs).
Yeah, fanfic for sure.
I can get the guy to confirm it LOL
Agree it's not common to give unprompted background context within any given normal conversation. People usually default to the pull style, which I'd agree is ultimately less efficient.
All that said, even though AI prompting is forcing the issue, which is good, the takeaway should be that _intentionality_ is very high leverage, not that it comes from some given (prompt) structure.
"prompt engineer". Good God. It's come to this now.
my head went more like:
a person that engineers prompts = prompt engineer, but I do see why it was weird now
anywho, all the more reason to name the blog minddump
CPO - chief prompt officer
principal agentic coordinator
assistant (to the) prompt engineer
And if there is a prompt engineer, there must also be a prompt scientist, right?