Function calling in LLMs
OpenAI's function calling reminds me of the paradigm shift away from feature engineering that came with deep learning.
In deep learning, we went from hand-engineering features to merely providing a good basis (in the linear-algebra sense) and letting the model engineer its own features.
Similarly...
Before function calling, we had to decide how to slice our prompt into stages and send each stage to the LLM separately, write a parser for each stage's output, and build the logic to piece all of that information back together (and then send it to the LLM yet again for the final completion). Something like the sketch below.
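A minimal sketch of that manual staging, assuming the current openai Python SDK; the prompts, the regex parser, the model name, and the lookup_weather stub are all hypothetical stand-ins:

```python
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """One manual stage: a hand-written prompt in, raw text out."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def lookup_weather(city: str) -> str:
    return "light rain"  # stub standing in for a real weather API

# Stage 1: coax the model into a format we hope it follows.
stage1 = ask(
    "Extract the city from: 'Should I bring an umbrella in Paris?' "
    "Reply exactly as CITY: <name>"
)

# Stage 2: a brittle hand-written parser for that output.
match = re.search(r"CITY:\s*(\w+)", stage1)
city = match.group(1) if match else "unknown"

# Stage 3: call our own function, then send everything back for completion.
final = ask(
    f"The weather in {city} is {lookup_weather(city)}. "
    "Should the user bring an umbrella? Answer briefly."
)
print(final)
```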
With function calling, we simply give the LLM the tools it needs (a good basis that spans the space) and let it choose when to use them, eliminating most of the staging with its input prompts and output parsing.
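The same flow collapses to a single call plus one round-trip. A minimal sketch, again assuming the current openai Python SDK (the get_weather schema and model name are illustrative; older SDK versions use functions/function_call instead of tools/tool_choice):

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One "basis vector": a tool the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "Should I bring an umbrella in Paris?"}]
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=messages,
    tools=tools,
    tool_choice="auto",  # the model decides when (and whether) to call a tool
)

msg = response.choices[0].message
if msg.tool_calls:  # the model routed to a tool instead of answering directly
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)  # e.g. {"city": "Paris"}
    # Run the tool ourselves, hand the result back, ask for the completion.
    messages.append(msg)
    messages.append({
        "role": "tool",
        "tool_call_id": call.id,
        "content": "light rain",  # stubbed get_weather(**args) result
    })
    final = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages
    )
    print(final.choices[0].message.content)
```

The "when to call what" routing that we used to hard-code across stages now lives inside the model; our job reduces to declaring the tools and executing the calls it asks for.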
Here's an explicit intuition of what's going on, from the paper Toolformer: Language Models Can Teach Themselves to Use Tools. This is probably close to how OpenAI implements it.
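Roughly, the Toolformer idea is to fine-tune the model on text annotated with inline API calls, so it learns to emit something like `[QA("What is the capital of France?") → Paris]` mid-generation whenever a tool would help, with the call's result spliced back into the context before generation continues.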