They talk about his passion for technology, how he works with machine learning and data science at sennder, and whether deep learning is the future of artificial intelligence.
Listen to the podcast and read Notger’s key points in the summary below.
At sennder, Notger works in the business area supporting data-focused decision-making and data-driven tooling. His main focus is on pricing and recommendations, and he leads a team of machine learning engineers. He’s been with the company since March 2020.
On his passion for machine learning (ML) and artificial intelligence (AI)
Notger has always been interested in modeling and specialized in systems modeling, dynamics, and control theory at university.
After getting a Ph.D. in the field, he headed a consultancy company focusing on the same topics and discovered ML about 7-8 years ago. As ML became more popular, he saw it as a super interesting toolset to complement his already-established classical modeling skills.
What always attracted him to modeling is trying to make sense of something, defining objects, interactions, rules, and relationships. Setting things up and pulling strings to see what would happen.
Going from theory, where you model your system, design your controller, and have an idea of how it is supposed to work, to flipping the switch and seeing it actually work is always a magical moment.
On applying AI and innovation in business, and sennder’s unique position
He’s mostly worked in smaller teams and companies with clear goals and interfaces, which let him move fast when green-fielding, i.e. building things from scratch. That experience influences how he approaches work.
sennder is a digital freight forwarder, sitting between shippers and carriers. Shippers are few and large, e.g. Coca-Cola, Amazon, Volkswagen, but carriers are usually smaller family-owned companies with approximately 50 trucks. So we handle the risk and size transformation between them. The logistics industry is super traditional in the sense that business is still done with phone calls and fax machines, so there is a huge opportunity to bring these workflows into the digital age. And that’s what we’re doing.
So far we’ve been successful in our growth story. We raised our Series D funding this past January, which closed with additional funding in June, on the heels of merging with Everroad and acquiring Uber Freight Europe in 2020.
We try to address our business’ core challenges with our automation and ML initiatives.
Currently, we have two big challenges:
- Ensuring efficient and scalable workstreams internally.
- Managing search-space complexity for our carriers.
The challenge of keeping workstreams efficient as we scale is rooted in the fact that our business is interaction-heavy: total costs and expenses grow along with the business. So we want tools that make us more efficient as the business grows.
The problem of managing the search-space complexity revolves around the issue that the number of loads is growing, as is the number of carriers in our network. Growth on both sides means it would become a problem if we showed every open load to every interested carrier. A carrier can’t comb through 2000 possible loads to find the most suitable one in a reasonable amount of time, especially considering that they would have to do so on dozens of platforms. So we also have to control the search-space complexity.
Our response to these challenges is to use ML systems that take shortcuts or prune our offering to what is most likely to be of interest.
We supply price estimates and an automated order flow, where you can just click on the “buy now” button, as you would on major e-commerce platforms. And we also support operations with tools that increase their efficiency.
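As an illustration of the idea behind instant price estimates, the simplest possible model is a one-parameter least-squares fit of price against distance. Everything below (the function, the numbers, the per-km rate) is a made-up sketch, not sennder’s actual pricing model:

```python
def fit_price_per_km(distances_km, prices_eur):
    """Least-squares fit of price = rate * distance (no intercept)."""
    numerator = sum(d * p for d, p in zip(distances_km, prices_eur))
    denominator = sum(d * d for d in distances_km)
    return numerator / denominator

# Hypothetical historical lanes: (distance in km, agreed price in EUR).
rate = fit_price_per_km([100, 250, 400], [150, 375, 600])
estimate = rate * 320  # instant quote for a new 320 km load
```

A real price model would use many more features (lane, date, trailer type, market conditions), but the principle is the same: turn historical transaction data into an instant quote behind the “buy now” button.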
Furthermore, we address the search-space complexity with recommender systems that match carriers to loads: when a carrier comes to our website, they get 10 suitable loads recommended rather than having to browse through 2000 random ones.
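The pruning idea can be sketched in a few lines. The field names, the feasibility rule, and ranking by pickup distance below are all invented for illustration, not sennder’s recommender:

```python
from dataclasses import dataclass

@dataclass
class Load:
    load_id: int
    pickup_km: float     # distance from the carrier's base to the pickup
    trailer_type: str    # trailer required for this load

def recommend(loads, carrier_trailer, radius_km, k=10):
    """Prune to feasible loads, then rank by proximity and keep the top k."""
    feasible = [
        load for load in loads
        if load.trailer_type == carrier_trailer and load.pickup_km <= radius_km
    ]
    return sorted(feasible, key=lambda load: load.pickup_km)[:k]

# 2000 open loads collapse to at most 10 recommendations for this carrier.
open_loads = [Load(i, float(i % 500), "tautliner" if i % 2 else "reefer")
              for i in range(2000)]
top = recommend(open_loads, carrier_trailer="tautliner", radius_km=150.0)
```

A production system would score matches with a learned model rather than a hand-written rule, but the shape is the same: filter out the infeasible, rank the rest, show a short list.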
You might also be interested in: Agile data science at sennder.
On the challenges of applying AI to companies
The biggest challenges with ML are:
- Getting the data.
- Getting stakeholder support.
If you do not gather enough data, or you’re not the owner of the data-gathering process, you’re at the mercy of someone else’s roadmap, and that can be a bad position to be in. The successful business models of recent years have in common that they built data-gathering machines at their very core. As tricky as it can be to compare to brands like Google, Facebook, or Amazon, you find this element in their models.
For example, the Google search engine is as convenient for the user as it is an ad-serving machine and a data-gathering machine. The Android system that Google supports is primarily a platform to sell things and gather data. So if a company assembles a data science team, pushes it into some corner, and never discusses data gathering, it’s not going to work.
Getting stakeholder buy-in is super important and tricky for different reasons. One of them being that stakeholders might speak a totally different language than you. They might also be resistant to an outside agency or have a fear of being automated away. There might also be political and hierarchical power plays that make it a very complex and tricky landscape to maneuver.
Getting everyone on the same page and moving in the same direction is not something an ML engineer can do on their own; it’s something everyone has to participate in. It’s as much a communication effort as a technical one.
As the same holds true for data gathering, it becomes clear that any ML system is an effort involving the whole company end-to-end. If you don’t have a strategy for how to rally everyone under the same flag, but instead think that a budget and a “head of department” will magically solve everything, then it’s best not to start at all.
On whether deep learning is the future of AI
Deep learning has had awesome successes, like language models and image recognition. However, certain limitations are already visible, which makes it unlikely that deep learning is going to be the main method or driver of business revenue. It’s just going to be one out of several approaches.
There is a scaling issue. Looking at what deep learning models can currently do, each new model brings only a marginal improvement over its predecessor. Yet each new model is also exponentially bigger, so it consumes exponentially more energy, needs more machines, and costs more to train.
Considering climate change issues and the energy they need, is it sensible to support these kinds of models? This is one core blocker.
Another blocker is that the models don’t provide clarity, reasoning, or understanding; they are just statistical pattern machines. They make predictions based on the patterns they see in the data, but there is not even a semblance of intelligence behind them that understands the context.
What you put into them is also what you get out of them.
If you look at language models, the statistical pattern machine replicates any bias found in the data. And if you look at Western literature and the internet, you find biases towards certain groups of people. For example, women are more likely to be associated with kitchen-related words by those language models, and certain groups in society are more likely to be associated with negative traits like laziness or criminality. Those are definitely not the models we want to have. However, by the very nature of how these models are set up, you can’t avoid it.
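A deliberately tiny toy illustrates the mechanism: a purely statistical model can only reflect the co-occurrence counts in its training text, so any skew in the corpus becomes a skew in the model. The four-sentence “corpus” below is invented for the demonstration:

```python
from collections import Counter

corpus = [
    "she cooked dinner in the kitchen",
    "she cleaned the kitchen",
    "he fixed the car engine",
    "he drove the car",
]

def cooccurrence(word):
    """Count which words appear in the same sentence as `word`."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t != word)
    return counts

# The counts have no notion of people or fairness; they just mirror the
# text: "she" co-occurs with "kitchen" and "he" with "car" because the
# corpus says so, and a model trained on it inherits exactly that skew.
```

Real language models learn far subtler statistics than raw co-occurrence, but the failure mode is the same: biased input, biased associations out.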
So what will the future of AI be? It might be a mix of many methods that are yet to be devised, for example Reinforcement Learning finally becoming useful and Causal Modeling sitting at the core of development architecture. But that could be an article topic on its own.