
Beyond Ctrl+F - Use LLMs for PDF Analysis


PDFs are everywhere, seemingly indestructible, and woven into our daily lives in every thinkable and unthinkable place. We've all got mountains of them, and even companies shouting about "digital transformation" haven't managed to escape their clutches. Now, I'm a product guy, not a document management guru. But I started thinking: if PDFs are this omnipresent, why not throw some cutting-edge AI at the problem? Maybe Large Language Models (LLMs) and Retrieval Augmented Generation (RAG) could be the answer.

Don't get me wrong, PDF search indexes like Solr exist, but they're basically glorified Ctrl+F. They point you to the right file, but don't actually help you understand what's in it. And sure, Microsoft Fabric's got some fancy PDF Q&A stuff, but it's a complex beast with a hefty price tag.

That's why I decided to experiment with LLMs and RAG. My idea? An intelligent knowledge base built on top of our existing PDFs. Imagine asking a question and getting a precise answer, with the relevant document sections highlighted, without having to sift through pages of dense text.

Why RAG? Contextual Search

Retrieval Augmented Generation (RAG) is a fancy way of saying that your LLM gets a little help from its friends. Instead of just relying on its internal knowledge, RAG taps into a separate "retrieval" step that finds relevant information from an external source (like your PDF collection). This means you get answers grounded in your specific documents, not just general knowledge scraped from the web.

Why is this such a big deal? Because context matters! Think about those times when you needed to:

  • Trace a Transaction: You've got a bank account number, but it appears in multiple financial reports. RAG can pinpoint the exact documents where the number is relevant to a specific transaction.
  • Summarize Research: You've got a stack of whitepapers on a new technology. RAG can distill the key findings and compare them across documents, saving you hours of reading.
  • Answer Complex Questions: You want to know your company's stance on a specific policy issue. RAG can scan through internal documents and provide a nuanced, contextualized answer.

RAG isn't just about finding keywords; it's about understanding the meaning and relationships within your documents.
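To make that concrete, here's a minimal sketch (my illustration, not part of the docai scripts; the model name and sentences are arbitrary examples) of how a SentenceTransformer ranks passages by meaning rather than shared keywords:

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# Example embedding model; any sentence-embedding model behaves similarly
model = SentenceTransformer("all-MiniLM-L6-v2")

query = "How do I cancel my subscription?"
passages = [
    "To terminate your plan, open account settings and choose 'end membership'.",
    "Our offices are closed on public holidays.",
]

# Embed the query and the passages, then rank passages by cosine similarity
scores = util.cos_sim(model.encode(query), model.encode(passages))[0]
for passage, score in sorted(zip(passages, scores), key=lambda p: float(p[1]), reverse=True):
    print(f"{float(score):.3f}  {passage}")
```

The first passage shares no keywords with the query, yet it should score far higher than the second - exactly the behavior a Ctrl+F-style index can't give you.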

DIY LLM-Powered Search

So I sat down, read the documentation for langchain, transformers, and vector search, and tinkered together a Python script that works for me but can also serve as the basis for much more advanced tools or apps - docai (see on GitHub).

There are two Python scripts: one uses Hugging Face's online model registry, the other looks for a local model and falls back to Hugging Face to download the necessary definitions. Both follow the same flow - check the setup, prepare the data, execute the query - which breaks down into five steps (condensed sketches of the pipeline follow the list):

  1. Checks Your Setup: Makes sure you've got the right tools for the job. It verifies your Python version (3.9 or higher) and installs any missing libraries needed for LLM magic (like langchain, transformers, etc.).

  2. Loads the PDF: Uses a library called PyPDFLoader to grab all the text from your PDF file.

  3. Breaks It Down: The text gets chopped into smaller chunks using RecursiveCharacterTextSplitter. Think of it like cutting up a giant pizza into manageable slices.

  4. Builds a Knowledge Base: These text chunks are embedded (converted into numerical representations that capture meaning) using a pre-trained SentenceTransformer model and stored in a FAISS vector store. This becomes your super-efficient search index.

  5. Asks & Answers: When you ask a question, the script performs a similarity search in the vector store to find the most relevant chunks of text. Then, it feeds those chunks and your question into a language model from Hugging Face (you get to choose which one!). The model generates an answer based on the context it's been given.
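Condensed, steps 2-4 look roughly like this - a sketch, not the exact docai code; the file name, chunk sizes, and embedding model are placeholder choices, and import paths vary slightly between langchain versions:

```python
# pip install langchain langchain-community sentence-transformers faiss-cpu pypdf
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Step 2: grab all the text from the PDF ("report.pdf" is a placeholder)
docs = PyPDFLoader("report.pdf").load()

# Step 3: chop the text into overlapping, manageable chunks
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# Step 4: embed the chunks and store them in a FAISS vector store
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = FAISS.from_documents(chunks, embeddings)
```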
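Step 5 then continues from the vectorstore built above (again a sketch; the question is made up, and flan-t5-base is just a small example model - swap in whichever Hugging Face model you prefer):

```python
from transformers import pipeline

question = "What were the key risks mentioned in the report?"  # example question

# Step 5a: similarity search for the chunks most relevant to the question
hits = vectorstore.similarity_search(question, k=4)
context = "\n\n".join(doc.page_content for doc in hits)

# Step 5b: feed context + question into a language model from Hugging Face
# (flan-t5-base is tiny and may truncate long contexts; use a larger model for real work)
generator = pipeline("text2text-generation", model="google/flan-t5-base")
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
print(generator(prompt, max_new_tokens=200)[0]["generated_text"])
```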

Why Do I Like It?

This simple script harnesses the power of two cutting-edge technologies:

  • Large Language Models (LLMs): These models are trained on massive amounts of text data, so they're great at understanding language, summarizing information, and generating answers.
  • Retrieval Augmented Generation (RAG): This combines the power of LLMs with information retrieval. By searching your PDF collection for relevant context, RAG gives your LLM the information it needs to give you accurate, targeted answers.

TL;DR

Using LLMs and RAG to layer the content of your files into a contextual search engine can revolutionize how you utilize your PDF archives. This allows you to build a comprehensive knowledge system based on historical information, accessible and beneficial to everyone in your organization. Natural-language queries are translated into vector-semantic searches, and the LLM/SLM answers from the retrieved context, providing relevant answers grounded in the data you provide, effectively reducing hallucinations and improving accuracy. This approach is straightforward and focuses on delivering practical results rather than getting bogged down in complex technology.
