Read the original post on the CTO website here.
The Berkeley Language Center and the Language and AI working group at the Townsend Center for the Humanities held a conference this week, Language and AI: Generating Interdisciplinary Connections and Possibilities. Organized around three pillars (research, industry, and theory) with academic talks and panels, the event brought together people from different disciplinary perspectives to explore how language and culture learning intersect with AI.
As I noted in my brief opening remarks introducing the event (reproduced below), making interdisciplinary connections around language and AI is exactly what we should be doing at Berkeley in 2024. Often (even at a university like UC Berkeley!) discussions around AI focus predominantly on how to leverage AI for administrative uses: Can the institution use generative AI to save money? Could it improve the student experience? If we use it, how do we balance risks and opportunities? This week's conference was a reminder of the role academics at UC Berkeley play in building deep knowledge and, when it comes to AI, testing the affordances and limits of the technology. Even more importantly, their discoveries amount to foundational knowledge and know-how directly applicable to society; in other words, knowledge translatable across many domains beyond those of the initial academic inquiries.
At the conference, academics from technical fields, the humanities, and combinations of both (along with participants from industry) showcased how we are collectively pursuing long-standing academic research and inquiry, using AI in a very hands-on way.
The academics at the conference are collaborating with STEM researchers to use, and in some cases further develop themselves, cutting-edge techniques in modern generative AI. While the media tends to focus on consumer-facing AI applications and their challenges, any organization trying to build an innovation layer around AI would do well to mitigate risk and build value with internally facing tools.
This aligns with the three-step process for AI deployment I've written about here:

1. Enablement: get access to the technologies with appropriate contracts, guardrails, and use cases.
2. Accelerate knowledge: build teams (or, in the case of large, decentralized organizations, launch an "AI Community").
3. Establish "functional governance": center more complex functional decisions about AI with business/academic functions. Rather than IT setting functional parameters around AI governance, let business/academic functions do this within the organization's existing decision-making processes.

Note that all three parts may progress concurrently; there is no need to hold back functional governance until the foundations or the community have matured. Central to all these steps is hands-on, practical, safe access to and experimentation with AI.