Data harvesting superapp admits it struggled to wield data - until it built an LLM

Asia's answer to Uber, Singaporean superapp Grab, has admitted it gathered more data than it could easily analyze - until a large language model and generative AI turned things around.

Grab offers ride-share services, food delivery, and even some financial services. In 2021 the biz revealed it collects 40TB of data every day. Execs have bragged that its fintech arm knows enough about its drivers that it can rate their suitability for a loan before they even bother applying.

In a Thursday blog post, the developer admitted it has sometimes struggled to make sense of all that data.

"Companies are drowning in a sea of information, struggling to navigate through countless datasets to uncover valuable insights," the org wrote, before admitting it was no exception. "At Grab, we faced a similar challenge. With over 200,000 tables in our data lake, along with numerous Kafka streams, production databases, and ML features, locating the most suitable dataset for our Grabber's use cases promptly has historically been a significant hurdle."

Prior to mid-2024, Grab used an in-house tool called Hubble - built on top of the popular open source platform DataHub and utilizing open source search and analytics engine Elasticsearch - to sort through its giant data pile.

"While it excelled at providing metadata for known datasets, it struggled with true data discovery due to its reliance on Elasticsearch, which performs well for keyword searches but cannot accept and use user-provided context (ie it can't perform semantic search, at least in its vanilla form)," Grab's engineering blog explains.
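The gap Grab describes is the difference between matching literal tokens and matching meaning. A minimal sketch of that contrast, using invented dataset names and a hand-made toy embedding table in place of a real embedding model:

```python
import math

def keyword_search(query, names):
    """Keyword-style search: return names containing any query token verbatim."""
    toks = query.lower().split()
    return [n for n in names if any(t in n.lower() for t in toks)]

# Toy embeddings standing in for a real embedding model's vectors.
EMB = {
    "ride history":        [0.9, 0.1, 0.0],
    "driver_trip_events":  [0.8, 0.2, 0.1],
    "food_order_payments": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def semantic_search(query, names):
    """Semantic-style search: rank candidates by embedding similarity to the query."""
    q = EMB[query]
    return max(names, key=lambda n: cosine(EMB[n], q))

names = ["driver_trip_events", "food_order_payments"]
print(keyword_search("ride history", names))   # → [] — no literal token overlap
print(semantic_search("ride history", names))  # → driver_trip_events
```

The keyword search returns nothing because "ride" never appears verbatim in "driver_trip_events", while the embedding comparison still ranks it first - which is exactly the user-provided-context problem the blog post describes.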

Eighteen percent of searches were abandoned by staff users. Grab surmised that the Elasticsearch parameters provided by DataHub were not yielding helpful results.

But Elasticsearch wasn't the only culprit behind laborious data discovery - much documentation was simply missing. Only 20 percent of the most frequently queried tables had any description at all.

The developer's data analysts and engineers were forced to rely on internal tribal knowledge in order to find the datasets they needed. Most reported it took days to find the right dataset.

Grab sought to rectify this through three initiatives: enhancing Elasticsearch; improving documentation; and creating an LLM-powered chatbot to catalog its datasets.

The Singaporean superapp enhanced Elasticsearch by boosting relevant datasets, hiding irrelevant ones, and simplifying the user interface.
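In Elasticsearch, that kind of relevance tuning is typically expressed as per-clause boosts and filters in the query body. A sketch of what such a query might look like - the field names, boost values, and popularity signal here are illustrative assumptions, not Grab's actual configuration:

```python
# Sketch of an Elasticsearch query body that boosts likely-relevant
# datasets and hides irrelevant ones. Field names ("name", "description",
# "query_count", "deprecated") and boost values are invented for
# illustration, not taken from Grab's real schema.

def build_search_body(user_query: str) -> dict:
    return {
        "query": {
            "bool": {
                "should": [
                    # A match in the dataset name counts 3x more than a
                    # match in its free-text description.
                    {"match": {"name": {"query": user_query, "boost": 3.0}}},
                    {"match": {"description": {"query": user_query, "boost": 1.0}}},
                ],
                # Filter out datasets flagged as deprecated.
                "must_not": [{"term": {"deprecated": True}}],
            }
        },
        # Break score ties with a usage signal, nudging popular tables up.
        "sort": ["_score", {"query_count": {"order": "desc"}}],
    }

body = build_search_body("driver trips")
```

The body would be sent to the cluster's search endpoint as-is; the point of the sketch is simply that boosting and hiding are both declared in the query DSL rather than in application code.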

Eventually it brought the number of abandoned searches down to just six percent. It also built a documentation generation engine that used GPT-4 to produce labels based on table schemas and sample data. That effort increased the number of datasets with thorough descriptions from 20 to 70 percent.
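A documentation engine of that shape boils down to turning a table's schema and a few sample rows into a prompt and asking the model for a description. A minimal sketch - the schema, sample data, and prompt wording are invented, and the model call itself is stubbed out rather than a real GPT-4 API call:

```python
# Sketch of an LLM-backed documentation step: build a prompt from a
# table's schema and sample rows. The table name, schema, and prompt
# wording are invented examples; the actual model call is stubbed.

def build_doc_prompt(table: str, schema: dict, samples: list) -> str:
    cols = "\n".join(f"- {name}: {dtype}" for name, dtype in schema.items())
    rows = "\n".join(str(r) for r in samples[:3])  # cap context size
    return (
        f"Write a one-paragraph description of the table `{table}`.\n"
        f"Columns:\n{cols}\n"
        f"Sample rows:\n{rows}\n"
        "Explain what each column means and likely use cases."
    )

prompt = build_doc_prompt(
    "driver_trip_events",
    {"trip_id": "string", "driver_id": "string", "fare_cents": "int"},
    [{"trip_id": "t1", "driver_id": "d9", "fare_cents": 1250}],
)
# The prompt would then be sent to the model, e.g.:
# description = llm_client.complete(prompt)  # stub — not a real API call
```

Including a few sample rows alongside the schema gives the model concrete values to anchor on, which is what makes generated descriptions specific rather than generic.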

And then it built the pièce de résistance: its own LLM. Called HubbleIQ, the LLM uses an off-the-shelf search tool called Glean to draw on its newly expanded descriptions and recommend datasets to its employees through a chatbot.

"We aimed to reduce the time taken for data discovery from multiple days to mere seconds, eliminating the need for anyone to ask their colleagues data discovery questions ever again," the superapp techies blogged.

The upgrades are a work in progress. Grab intends to work to improve the accuracy of its documentation and incorporate more dataset types into its LLM, in addition to other initiatives.

Grab's hyperlocalization strategy, which is enabled by its massive quantities of data, has given it the edge to know the ins and outs of Asia's people and roads - and frankly kept the business alive.

While its 2021 IPO results were unquestionably disappointing, it did run Uber out of town.

In its Q2 2024 earnings, Grab reported a record 41 million monthly transacting users, narrowing losses, and 17 percent revenue growth.

"Features like mapping, hyper batching and just-in-time allocation, they're all unique to Grab and none of our competitors have that and we believe that makes us consistently more reliable as well as more affordable," explained CEO Anthony Tan.

Consistently reliable, affordable ... and drowning in datasets. ®
