5 key things to think about when implementing your own Artificial Intelligence Model

There are many potential issues you can run into when implementing your own AI models. Here are five key things to think about that will reduce the likelihood of a slip-up.

The appearance of so many open source large language models (LLMs) that emulate ChatGPT 3 and 4 level responses has been surprising, especially given the expensive GPU requirements for training models.

Hugging Face, the home of open source artificial intelligence, has an ever-growing number of collaborators and models exploring many interesting use cases. A lot of effort by these collaborators has gone into getting LLMs running locally on consumer devices and creating models that can be trained on (albeit expensive) consumer hardware. This is making the technology more accessible to businesses.

It is exciting to be able to download and run a local ChatGPT-style clone on your device and train it on your own proprietary data to provide services to your customers. It does come with risks, though. Here are five things that need to be considered:
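To make this concrete, here is a minimal sketch of what running an open-weight model locally can look like using the Hugging Face transformers library. The model name is an example only, and (as the first point below explains) you should check its licence before building anything commercial on it.

```python
# Minimal sketch: running an open-weight, instruction-tuned model locally with
# Hugging Face transformers (device_map="auto" also needs the accelerate package).
# The model name is illustrative only - check its licence before commercial use.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b-instruct",  # example open model from the Hugging Face Hub
    device_map="auto",                  # place the model on GPU/CPU as available
)

prompt = "Explain, for a business audience, the risks of deploying a custom LLM."
result = generator(prompt, max_new_tokens=200, do_sample=True)
print(result[0]["generated_text"])
```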

Open Source does not always mean open source

 

Many of the models out there claim to be open source, but the licences need to be read carefully. Many have clauses stating that they are not available for commercial use (most derivatives of LLaMA, for instance). If you are going to spend a lot of time and money training a model on your data to meet a commercial use case, you need to make sure you are starting with the right model, or you could be wasting both. Do your research and involve your legal team if need be.
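As a small example (the repository name is illustrative, and the licence a model declares on its card is a starting point, not legal advice), the Hugging Face Hub exposes licence metadata you can inspect before investing in a model:

```python
# Sketch: inspecting the licence a model declares on the Hugging Face Hub
# before committing to it. The repo id is an example only; always read the
# full licence text and involve your legal team where needed.
from huggingface_hub import HfApi

api = HfApi()
info = api.model_info("tiiuae/falcon-7b-instruct")  # example repo id

# The declared licence usually appears as a tag such as "license:apache-2.0"
licence_tags = [tag for tag in (info.tags or []) if tag.startswith("license:")]
print(licence_tags or "No licence tag declared - investigate before use")
```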

Be careful about all of your future requirements – think up front about how you are going to address them

 

As an example, most of the open source models perform well in English because they are largely trained on English data, but what if you need to be multi-lingual? How are you going to meet this requirement in a cost-effective way? It takes design work and up-front thinking to work out how to manage multiple languages. Gathering your non-functional requirements up front and working out how to address them will be key to your long-term success, as the usual non-functionals are not easy to retrofit later.

Data, privacy and ethics

 

You need to consider the ethical implications and data privacy concerns associated with developing and deploying an LLM. Many jurisdictions (like the EU) are working on AI-specific laws that at some point you are likely to have to comply with. How are you going to reliably keep customer and client data secure in the language model? What do you need to consider in the context of your business and use case? One other key thing that needs stressing is that your LLM is only as good as the data it is trained on. If your data is in a poor or incomplete state, you do not have strong foundations to build on.

Bias

 

If your training dataset has bias in it, so will your model. And here's the thing: every dataset has some level of bias, because it is created by humans. Bias can surface in unforeseen ways. You need a robust process for managing bias both before releasing your LLM and after release – it is unlikely you will spot every bias before it gets into your customers' hands.

Cost

 

Experimenting with and implementing artificial intelligence is getting less costly, but it still carries a high cost. Training models currently (June 2023) requires expensive hardware. There is also high demand for talent in this area, driving up resource costs, and running costs can be high too. Businesses should look at the investment they plan to make in their own AI capabilities and size it appropriately – this is new technology and success is not guaranteed.

Businesses need expertise in many areas to be successful with AI – and even then success is not guaranteed. Get expert advice, think about the whole picture and keep up with developments – this field is moving incredibly rapidly.

If you are a business leader looking for more information about AI, why not take our AI for Business Leaders course?

 

 

Disclaimer: This blog post was not written by AI
