Allow me to begin with the BLUF (the “Bottom line up front,” as they say in the military): My view is that AI can be a powerful assistant for tasks where we already have expertise. However, I am quite wary of using AI for tasks where the user can’t personally validate the quality of the answer the machine produces.
I would like to hear your thoughts on the work of the State Department's Deputy Chief Data and AI Office, discussed in detail here: https://www.chinatalk.media/p/the-future-of-ai-diplomacy-can-the
Great interview. I agree with just about everything Garrett Berntsen says. I've followed the work of the data office at State very closely since it opened. They've been doing important work, not just conducting good analysis but also shifting the culture of an organization that is distrustful of new approaches to foreign policy. Garrett is right: there's a lot of opportunity in using new technology. But we also have to collectively hold the line against inappropriate uses of new technology, especially as pushed by industries trying to profit off of government. The worst-case scenario is that expensive new technology is adopted but fails to provide value, engendering distrust.
Did you catch aspects of the interview that you think conflict with my perspective? Or that you disagreed with?
Generally I felt like your perspectives jibed well. I think Garrett might have been a little more willing to push folks out of their comfort zone with LLMs, but the difference was a matter of degree. Thank you for responding.
That's probably right. I think I'm a bit biased: I believe the institution should prioritize building individual expertise even as it implements more technical tools like AI. But as you point out, it's a matter of degree.