RAG pipelines split your page into fixed-token chunks (256, 512, or 1024 tokens), embed each one, and retrieve the best-matching chunk when a user asks a question. If your content only makes sense when read in order, RAG breaks down: the LLM sees a chunk full of "it" and "this" with no antecedent in sight. The scorer splits the page three ways, once at each chunk size, and scores each resulting chunk.
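A minimal sketch of that splitting step, assuming whitespace tokens stand in for real model tokens (a production pipeline would count tokens with the embedding model's own tokenizer; `chunk_tokens` and the sample document are hypothetical names, not part of the scorer described above):

```python
def chunk_tokens(text: str, chunk_size: int) -> list[str]:
    # Naive whitespace split as a stand-in tokenizer; real pipelines
    # count model tokens, so boundaries will differ in practice.
    tokens = text.split()
    return [
        " ".join(tokens[i:i + chunk_size])
        for i in range(0, len(tokens), chunk_size)
    ]

# A toy 1000-token "page" so the chunk counts are easy to check.
doc = " ".join(f"tok{i}" for i in range(1000))

# Split the same page three ways, mirroring the 256/512/1024 sweep.
chunks = {size: chunk_tokens(doc, size) for size in (256, 512, 1024)}

for size, parts in chunks.items():
    print(size, len(parts))  # 256 -> 4 chunks, 512 -> 2, 1024 -> 1
```

Each chunk in `chunks[size]` would then be embedded and scored independently, which is exactly where order-dependent prose loses its antecedents: a pronoun in chunk 3 can refer to a noun that only exists in chunk 2.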