The writer says AI (Claude) handled 18 footnotes, helped shape examples and framing, and drafted counterarguments, while the author retained control of core definitions, evidence, and voice.
The account highlights the risks of citation drift, misattributed studies, and generic prose, which required manual verification of sources, including the Knight Capital figures, and repeated rewrites to preserve accuracy and style.
It argues that AI works best when its autonomy is matched to business risk and competitive differentiation, with the strongest value coming from adversarial critique and stress-testing rather than fully delegated authorship.
With AI investment soaring to $7.6 trillion, are we building a productivity boom or just the world’s most expensive fact-checking machine?
As AI accelerates work while increasing errors, are we trading speed for a hidden 'cognitive debt' that could trigger future crises?
If AI can write, code, and research, what is the truly 'unautomatable' human skill that will define professional value in the next decade?