Recently, we had the chance to speak with Amit Attias, CTO and Co-Founder at Bigabid. Bigabid is a data company that uses machine learning to help companies of all sizes market their mobile applications through user acquisition and retargeting. In our conversation, we talked about why they moved off of spreadsheets to track their models and started using Fiddler, and what Amit’s advice would be to other managers who work with ML systems.
Q: How are you using Fiddler technology?
A: We have thousands of machine learning models in production. We discovered we had to monitor them to understand drift, whether it's in a feature or in a prediction, and whether anything is going wrong in production. We use Fiddler to monitor those drifts.
We also use Fiddler for explainability, whether it’s “Why did this happen?” or “Why didn't it go that way?” And, was something changed? Why are we seeing different results?
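To make the drift monitoring Amit describes concrete, here is a minimal, generic sketch, assuming a Population Stability Index (PSI) comparison between a training baseline and a sample of production traffic for a single feature. It is not Fiddler's API; the distributions, bin count, and threshold rule of thumb are all illustrative.

```python
# Minimal sketch of feature-drift detection (illustrative only, not Fiddler's API):
# compare a production sample of one feature against its training baseline using
# the Population Stability Index (PSI), with bin edges taken from the baseline.
import numpy as np

def psi(baseline: np.ndarray, production: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two 1-D samples of the same feature."""
    edges = np.histogram_bin_edges(baseline, bins=bins)  # shared grid from training data
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Floor empty bins at a tiny probability to avoid log(0).
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

# Hypothetical example: production traffic whose mean has shifted produces a higher PSI.
rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 50_000)   # training baseline
prod_feature = rng.normal(0.4, 1.0, 5_000)     # drifted production traffic
print(f"PSI = {psi(train_feature, prod_feature):.3f}")  # rule of thumb: > 0.1 is worth a look
```

The same comparison can be run on the model's outputs, which is how the "feature or prediction" distinction above plays out in practice.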
Q: What were you doing before you used Fiddler?
A: Before we used Fiddler, we developed something in-house: a kind of super-sophisticated spreadsheet. We looked at other companies that struggled with the same challenge, and most of them did the same. We all had huge spreadsheets that we needed to refresh every day. They were really hard to maintain and didn't give us exactly what we wanted. Then we started looking at other solutions and found Fiddler.
Q: Why did you choose Fiddler?
A: We probably looked at five or six other solutions. I preferred Fiddler because, as a manager, what I appreciated most was that they understood both the data scientist's perspective and the manager's perspective. I could get the visuals I need as a manager, like the dashboards, and the explainability I needed, not only the model monitoring and other technical pieces that the data scientists need.
When we looked at other companies, they said things like, "We separate monitoring from BI, and we're not doing BI." For me, it wasn't BI. It was just part of the monitoring that, as a manager, I wanted to have. Monitoring is something you need at every level. I don't want to monitor whatever the data scientist wants to monitor, but we both need to monitor something. And that was Fiddler's approach.
Q: What was the impact of Fiddler on your work and team?
A: First, we didn't need to maintain our own solution, which is a major impact. Using software as a service is simply a faster way to move. We were also able to deploy faster. And we can trust the system to alert us if something happens.
For example, we solved the problem of a data engineer changing something in production without telling the data scientist; neither realizes they're out of sync, and that can affect the model. That has actually happened; we saw it in production. We had anomalies in the features and data drift that the data scientist could see immediately, and they found out that the data engineer had deployed something to production that changed the predictions. Previously it would have taken us weeks or months to discover that. So I'd say that was the greatest impact: we could save time, and we could trust the system.
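As a rough sketch of the kind of per-feature check that surfaces an unannounced pipeline change like the one above, the snippet below runs a two-sample Kolmogorov-Smirnov test for each feature against its training baseline and flags anything that shifted. This is a hedged illustration, not Fiddler's alerting: the feature names, the silent unit change, and the p-value threshold are all hypothetical.

```python
# Illustrative per-feature drift alert (not Fiddler's alerting API):
# compare live feature samples against training baselines with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
baseline = {  # hypothetical training-time feature samples
    "session_length": rng.exponential(5.0, 20_000),
    "install_age_days": rng.integers(0, 365, 20_000).astype(float),
}
production = {  # hypothetical live traffic after an unannounced pipeline change
    "session_length": rng.exponential(5.0, 2_000),
    "install_age_days": rng.integers(0, 365, 2_000).astype(float) * 24.0,  # silent unit change: days became hours
}

for name in baseline:
    result = ks_2samp(baseline[name], production[name])
    if result.pvalue < 0.01:  # illustrative threshold; real monitors tune this per feature
        print(f"ALERT: '{name}' drifted (KS statistic={result.statistic:.3f}, p={result.pvalue:.1e})")
```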
“Bigabid provides state-of-the-art scientific advertising using AI at scale. Our rich data pipelines support thousands of machine learning models that we regularly optimize. Using Fiddler to maintain and improve our ML performance, in conjunction with AWS services for reliable scale, means we can flex and react at the speed of business.”
— Amit Attias, Co-Founder and CTO at Bigabid
Q: Why is Model Performance Management important to your business?
A: Our system trades in advertising properties. So if we advertise to the wrong user by mistake, or bid the wrong price by mistake, that affects the end results, the revenue or the profit. Machine learning is at the core of our product; the algorithm is the one that decides how much to bid.
So if I'm able to track those problems quickly, I can immediately affect the results. Being able to understand where I'm wrong with the bids and where I'm drifting has a direct impact on revenue and profit.
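To see why a drifted prediction hits revenue directly, consider a deliberately simplified, hypothetical expected-value bidder (not Bigabid's actual bidding logic): the bid is priced from the model's predicted conversion probability, so an inflated prediction inflates the bid and can turn a profitable impression into a loss. The values, margin, and first-price assumption below are made up for illustration.

```python
# Simplified, hypothetical expected-value bidder (not Bigabid's actual logic).
def expected_value_bid(p_convert: float, value_per_conversion: float, margin: float = 0.2) -> float:
    """Bid the expected value of the impression, keeping a target margin."""
    return p_convert * value_per_conversion * (1.0 - margin)

value = 40.0        # hypothetical value of one conversion, in dollars
true_p = 0.010      # the user's real conversion probability
drifted_p = 0.016   # what a drifted model predicts for the same user

good_bid = expected_value_bid(true_p, value)     # 0.320
bad_bid = expected_value_bid(drifted_p, value)   # 0.512, a 60% overbid
print(f"calibrated bid: ${good_bid:.3f}, drifted bid: ${bad_bid:.3f}")

# Assuming a first-price auction, expected profit if the drifted bid wins is the
# true expected value minus the price paid, which here comes out negative.
print(f"expected profit at drifted bid: ${true_p * value - bad_bid:.3f}")
```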
Q: What advice do you have for other companies using ML?
A: I think most companies don't realize how different production is from the training or development environment. When you tell them they need to monitor production data and drift, they think that seeing good enough results means everything is fine. I would tell them that, first, it's not fine, and second, when something goes wrong, it's really complicated to find out why. I wouldn't go without monitoring in production.