"Artificial intelligence will solve our problem." Lots of entrepreneurs have a grand vision of how AI can solve any difficult problem for their business. That doesn't mean they're wrong, but the fields of AI and ML (machine learning) have many subcategories and areas of active research, each making progress on quite specific computational tasks. Applying the wrong form of learning or problem solving to a task can produce underwhelming results, while tweaking and tuning the right model or method is how you get solid results and outpace your competitors. I'd like to go through a few ML methods and what they're used for, hopefully to inspire some new ideas.
The first method that comes to mind when speaking of ML is neural nets, and more recently what we're calling "deep learning", which is significantly larger neural nets. Our brains are made of neurons, complex cells that transfer information among themselves. Each neuron has an action potential that determines whether it fires, based on stimulation from neurons earlier in the pathway. This action potential has been simulated as a mathematical function, and layers of simulated neurons, each with its own weights, connect the input to the output. In the simplest form, these layers produce decisions. For example, you can take an image, break its pixels down into three color values each, and lay them all out in a row. Feed those numbers through several layers of neurons that shrink in size until there's a single output value answering "is this an image of a dog?" That's a simple version of how a neural net works. Deep learning has been used in a ton of business applications: chatbots, reality detection, art and composition, image detection and captioning, robotics planning, bank fraud detection, self-driving strategy, colorizing black-and-white photos, OCR, and game-playing agents like AlphaStar.
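The dog-detector walkthrough above can be sketched in a few lines. This is a minimal forward pass only (no training), and the weights are arbitrary placeholders, not values learned from data:

```python
import math

def sigmoid(x):
    # Smooth squashing function standing in for the neuron's action potential
    return 1.0 / (1.0 + math.exp(-x))

def forward(pixels, w_hidden, w_out):
    # Hidden layer: each hidden neuron weighs every input value, then "fires"
    hidden = [sigmoid(sum(w * p for w, p in zip(weights, pixels)))
              for weights in w_hidden]
    # Single output neuron: a 0-to-1 score for "is this image of a dog"
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

# Four flattened "pixel" values feeding three hidden neurons; all weights invented
pixels = [0.2, 0.8, 0.5, 0.1]
w_hidden = [[0.5, -0.2, 0.1, 0.7],
            [-0.3, 0.8, 0.2, -0.1],
            [0.1, 0.4, -0.6, 0.3]]
w_out = [1.2, -0.7, 0.5]

score = forward(pixels, w_hidden, w_out)
print(f"dog score: {score:.3f}")  # always lands between 0 and 1
```

Training is the process of nudging those weight matrices until the score agrees with labeled examples; the forward pass itself stays exactly this simple.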
Adversarial Neural Nets
There is a subcategory called generative adversarial networks (GANs), where one net generates "fake" versions of an image or other data while a second net grades its authenticity. The two evolve and learn together, similar to mimicry in evolutionary biology. These are often used to generate video, audio, or image samples that look realistic to human evaluators.
Classification buckets a model's outputs into categories. A simple image classifier might have three: dog, cat, and neither dog nor cat. There are multiple implementation methods within this broad category, and it applies to a number of business cases: predicting which product to recommend to a user given past purchase history and likely buying behavior, guiding picking and packing for industrial robots, or categorizing user types on a social network given their friend group and stated interests.
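One of the simplest classifiers is nearest-centroid: average the training examples of each class, then assign a new point to whichever average it sits closest to. A toy sketch with invented 2-D features (the feature names and numbers are made up for illustration):

```python
import math

def centroid(points):
    # Mean of each coordinate across one class's training examples
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(point, centroids):
    # Pick the label whose centroid is closest in Euclidean distance
    return min(centroids, key=lambda label: math.dist(point, centroids[label]))

# Hypothetical features, e.g. (ear pointiness, snout length)
training = {
    "dog": [(0.2, 0.9), (0.3, 0.8), (0.25, 0.85)],
    "cat": [(0.9, 0.2), (0.8, 0.3), (0.85, 0.25)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

print(classify((0.3, 0.7), centroids))  # → dog
print(classify((0.9, 0.1), centroids))  # → cat
```

Real image classifiers learn their features instead of being handed them, but the bucketing step at the end works on the same "closest category wins" principle.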
Regression generally predicts a number or an amount of something: how much TV a customer is likely to watch, or how far a car will travel this year. The answer is a quantity on a number line and can have a wide span. Use cases include forecasting consumption of a resource (bandwidth, energy, orange juice) or how much a client might spend with the firm. It can also be used to optimize systems: what dimensions a carpet should have to appeal to the most customers, or what time a marketing email should go out for the best response.
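The simplest regression is an ordinary least-squares line fit. A sketch using invented data for the TV-watching example (the numbers are fabricated to be perfectly linear so the fit is easy to eyeball):

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Invented data: customer age vs. weekly hours of TV watched (exactly 0.4*age + 2)
ages  = [20, 30, 40, 50, 60]
hours = [10, 14, 18, 22, 26]

a, b = fit_line(ages, hours)
print(f"predicted hours at age 45: {a * 45 + b:.1f}")  # → 20.0
```

Production systems use many input variables and fancier models, but the output is the same kind of thing: a continuous quantity, not a category.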
Genetic algorithms (GAs) fit closer into the AI space, though I see all of these as methods for solving different classes of problems. A GA takes the process of natural selection and turns it into a computational problem. Three main things must be defined: the definition of an organism (its traits and parameters), a way to alter that organism from one generation to the next (sexual reproduction, asexual reproduction, random mutation), and a fitness function (how well the organism survived in the environment it's set in). The simulation is run, and the fittest individuals of each generation reproduce to spawn the next. This repeats until the desired fitness is achieved. Generally, a fit individual changed slightly will produce fit individuals, some more so than others, so you will reach (at the very least) a local optimum. It's easy to see how this would be a good model for simulating populations of creatures in an ecosystem with a certain metabolism, energy, and reproductive ability, but the algorithm is much more powerful. A good example would be a candle optimizer. The traits could be height, width, ingredients in the wax, and thickness of the wick. Reproduction could be a random slight change, or an average of two parents (if sexual). The fitness function could be burn time minus cost. Then we run the simulation to find the best candle given our parameters. Streamlining airflow models, packaging, formulae, and resource allocation within a company are all possible applications.
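A sketch of that candle optimizer. The fitness formula here is an invented stand-in (burn time minus cost, not real candle physics), reproduction is asexual mutation, and traits are clamped to a plausible range:

```python
import random

random.seed(0)

def fitness(candle):
    # Invented placeholder: burn time grows with wax volume and wick thickness,
    # cost grows with material used. Real measurements would replace this.
    height, width, wick = candle
    burn_time = height * width * (1.0 + wick)
    cost = 2.0 * height * width + 5.0 * wick
    return burn_time - cost

def mutate(candle):
    # Asexual reproduction: a small random tweak to each trait, kept in [0.1, 3.0]
    return tuple(min(3.0, max(0.1, t + random.gauss(0, 0.1))) for t in candle)

# Random starting population of candles: (height, width, wick thickness)
population = [tuple(random.uniform(0.5, 3.0) for _ in range(3)) for _ in range(30)]
initial_best = max(fitness(c) for c in population)

for generation in range(100):
    # Selection: the fittest third survives and repopulates via mutation
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print(f"best candle {best}, fitness {fitness(best):.2f}")
```

Because the top survivors are always carried forward, the best fitness never decreases from one generation to the next; mutation is what explores the neighborhood around them.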
While not an official name for a method, tournaments, similar to genetic algorithms, have been used to constantly play AIs against each other in a meta-computation method. DeepMind has used this tactic many times, in chess, Go, and StarCraft. You create a game-playing algorithm with a number of meta-parameters (how is irrelevant for this discussion, but it's often a deep neural net), then spawn many copies with varied parameters, like the organisms in a genetic algorithm. They all play each other in a tournament, and the winners reproduce with genetic mixing much like a GA; the tournament acts as the fitness function. This is largely useful for adversarial games, where you can pit one solution against another in a simulated task to determine a winner.
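A heavily simplified sketch of the tournament idea: here each "algorithm" is just a single number, and a head-to-head match is won by whoever sits closer to a hidden optimal play. The game and its known answer are invented for illustration; in a real system the strategies are full networks and no optimum is known in advance:

```python
import random

random.seed(1)

TARGET = 0.7  # hidden "optimal play" for this toy game

def play(a, b):
    # Head-to-head match: the strategy closer to the optimum wins
    return a if abs(a - TARGET) < abs(b - TARGET) else b

def tournament_round(population):
    random.shuffle(population)
    # Pair everyone off; half the field survives each round
    winners = [play(population[i], population[i + 1])
               for i in range(0, len(population), 2)]
    # Winners reproduce with slight mutation, GA-style, to refill the field
    children = [min(1.0, max(0.0, random.choice(winners) + random.gauss(0, 0.05)))
                for _ in range(len(population) - len(winners))]
    return winners + children

population = [random.random() for _ in range(32)]
for _ in range(50):
    population = tournament_round(population)

best = min(population, key=lambda x: abs(x - TARGET))
print(f"best evolved strategy: {best:.3f}")  # drifts toward 0.7
```

The point is that no one ever tells the population what the optimum is; winning matches is the only feedback, which is exactly what makes this useful for games too complex to score directly.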
Transfer learning takes a network trained for one task (often a convolutional neural net) and reuses it for a similar task without retraining from scratch. Say you have an image classifier that determines dog or no dog, and now you want it to determine cat or no cat. You can reuse the weights from your original training to reach your goal significantly faster. The basic idea is that the earlier layers of the network have learned generic concepts: what a line is, or an outline, fur, background, a leg. Once it has those simple building blocks, we can transfer them to another model for a different purpose. This method is often used when you already have an ML solution for one task and want to expand its capabilities without investing in training a new model from scratch.
Regret minimization is a relatively niche method I've seen in the poker-playing world. Poker is a game of imperfect information, as opposed to chess or Go: you don't know the nature of the other player's holdings, yet you must act in your best financial interest. Researchers found that an effective way to calculate and act in this scenario is regret minimization, which looks at the problem from the opposite perspective: not maximizing dollars, but minimizing regret. What action should I take on this street (call, bet, fold) to minimize my future regret or losses? This might be helpful for novel research in negotiation, exploration, or other imperfect-information scenarios.
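The core building block behind regret-minimizing poker solvers is regret matching: after each round, increase the future probability of the actions you regret not having taken. A self-play sketch on rock-paper-scissors, where the known equilibrium is playing each move a third of the time (poker solvers apply the same loop over vastly larger game trees):

```python
import random

random.seed(2)

ACTIONS = 3  # rock, paper, scissors

def payoff(a, b):
    # +1 win, 0 tie, -1 loss for action a against action b
    return (a - b + 4) % 3 - 1

def strategy(regrets):
    # Play each action in proportion to its positive regret; uniform if none
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS
    return [p / total for p in positive]

def sample(probs):
    return random.choices(range(ACTIONS), weights=probs)[0]

regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
strategy_sums = [[0.0] * ACTIONS, [0.0] * ACTIONS]

for _ in range(20000):
    strats = [strategy(regrets[0]), strategy(regrets[1])]
    moves = [sample(strats[0]), sample(strats[1])]
    for p in range(2):
        opp = moves[1 - p]
        for a in range(ACTIONS):
            # Regret for a = what it would have earned minus what we actually earned
            regrets[p][a] += payoff(a, opp) - payoff(moves[p], opp)
            strategy_sums[p][a] += strats[p][a]

avg = [s / sum(strategy_sums[0]) for s in strategy_sums[0]]
print("player 1 average strategy:", [round(p, 3) for p in avg])
```

Note that it's the *average* strategy over all rounds, not the final one, that converges toward the equilibrium of roughly one third per move.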
There are a ton of methods in ML that can be applied to business use cases. More important than a solid implementation is a deep understanding of the problem and the correct choice of method. Reach out if you need some guidance on a problem you're solving.