Ollama makes it pretty easy to run inference on a bunch of model-available releases. If a company is after code/text generation, finding a company/contractor to fine-tune one of those model-available releases on its source code, then having IT deploy Ollama to employees with M3 MacBooks decked out with 64 GiB of RAM, is well within the abilities of a competent and well-funded IT department.
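The packaging step is straightforward with Ollama's Modelfile mechanism. A minimal sketch, assuming the fine-tuned weights have been exported to a GGUF file (the path and system prompt here are hypothetical placeholders):

```
# Modelfile: wrap a locally fine-tuned model for Ollama
FROM ./acme-codegen.gguf

# Sampling settings tuned for code generation (illustrative values)
PARAMETER temperature 0.2

SYSTEM "You are an internal coding assistant for Acme Corp."
```

IT would then register and serve it with `ollama create acme-codegen -f Modelfile`, after which employees can run `ollama run acme-codegen` locally.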
What recognition has Facebook gotten for their model releases? How has that been priced into their stock price?
That's a completely different scale. You're not going to run GPT-4 like a random Ollama model. At that point you need dedicated external hardware for the service, and proper batching/pipelining to utilise it well. This is way outside "enough RAM in the laptop" territory.