Getting started with Logs

Lexer's Logs feature gives you complete visibility into how your customer data flows through the CDP. Whether you're monitoring routine data imports or troubleshooting specific issues, Logs provides a clear timeline of every data job - from initial file uploads through to final processing. You can quickly verify successful data loads, identify potential issues, and ensure your customer data is being processed correctly. This guide will show you how to use Logs effectively to monitor and maintain the health of your customer data.

Understanding data processing

When data enters the Lexer CDP, it flows through a sequence of six essential processing stages. Understanding this order helps you track your data's progress and quickly identify where any issues might occur.

Let's examine each stage:

  1. Integration or File Upload: The initial entry point for your data, whether through an automated integration (like Shopify or Klaviyo) or a manual file upload.
  2. Dataset Load: Your incoming data is loaded into a dataset, preparing it for processing within the CDP.
  3. Dataset Health Check: Automated checks verify data quality, focusing on:
    • Data volume consistency
    • Record recency
    • Rejected records
  4. Identity Resolution: Customer records are matched and merged across your datasets, creating a unified customer profile based on your identity resolution rules.
  5. Dataflow: Your data undergoes enrichment and organization processes, transforming it into Hub-ready format and standardizing attributes.
  6. Build Index: The final stage where your data is indexed for use throughout the Hub's interface, enabling segmentation and analysis.

Note: Not every data upload will trigger all six stages. The specific jobs that run depend on your data source and upload method.
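
If you find it easier to reason about the pipeline in code, here's a minimal sketch of the six stages as an ordered sequence. The names and structure are illustrative only - they model the documented order, not Lexer's internal implementation.

```python
# A minimal sketch of the six processing stages as an ordered pipeline.
# Stage names are illustrative; this models the documented order only.
from enum import Enum


class Stage(Enum):
    INTEGRATION_OR_UPLOAD = 1
    DATASET_LOAD = 2
    DATASET_HEALTH_CHECK = 3
    IDENTITY_RESOLUTION = 4
    DATAFLOW = 5
    BUILD_INDEX = 6


def stages_remaining(current: Stage) -> list[Stage]:
    """Return the stages that still have to run after `current`."""
    return [s for s in Stage if s.value > current.value]


# Example: a job that has just finished Dataset Load still has four stages to go.
print([s.name for s in stages_remaining(Stage.DATASET_LOAD)])
```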

Using the Logs interface

The Logs interface gives you a comprehensive view of all data processing activity in your CDP. Here's how to navigate and understand the main components.

Main Components

Each log entry displays six key pieces of information:

  1. Job Type - The specific process being performed, such as:
    • Integration Job
    • File Upload
    • Dataset Load Job
    • Identity Resolution
    • Dataflow Job
    • Build Index
    • History Activation
    • Delete Dataset Job
    • Activations
  2. Name - The identifier for the specific job
  3. Status - The current state of the job:
    • Success: Job completed successfully
    • Failed: Job encountered an error
    • Pending: Job is queued to start
    • In Progress: Job is currently running
  4. Source - Where the data originated from
  5. Created at - When the job started
  6. Duration - How long the job took to complete
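
If you ever script against exported log data, a record shaped like the six fields above might look like the sketch below. The field names and types are assumptions for illustration, not Lexer's API schema.

```python
# A sketch of a log entry carrying the six fields described above.
# Field names are illustrative assumptions, not Lexer's actual schema.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class LogEntry:
    job_type: str                # e.g. "Integration Job", "Build Index"
    name: str                    # identifier for the specific job
    status: str                  # "Success", "Failed", "Pending", or "In Progress"
    source: str                  # where the data originated
    created_at: datetime         # when the job started
    duration: timedelta | None   # None while the job is Pending or In Progress


entry = LogEntry(
    job_type="Dataset Load Job",
    name="shopify-orders-daily",
    status="Success",
    source="Shopify",
    created_at=datetime(2024, 12, 12, 6, 0),
    duration=timedelta(minutes=4),
)
print(entry.status, entry.duration)
```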

Filtering

To help you find specific logs quickly, use the filtering options at the top of the page:

  1. Date Range - Default is Last 7 Days, but you can select any custom period
  2. Job Type - Filter for specific types of jobs
  3. Status - Focus on specific job statuses (e.g., only Failed jobs)
  4. Dataset - Filter by a specific dataset (shown as datasetID - datasetname)

You can sort any column by clicking its header - an arrow will indicate ascending or descending order.
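
The same filtering and sorting can be reasoned about in code. The sketch below applies the Date Range and Status filters and a column sort to plain Python dictionaries; it's purely illustrative, since the real filtering happens in the Logs UI.

```python
# An illustrative sketch of the Logs page's filtering and sorting behavior,
# applied to plain dictionaries. Field names are assumptions.
from datetime import datetime, timedelta

logs = [
    {"job_type": "Integration Job", "status": "Success",
     "created_at": datetime(2024, 12, 10, 6, 0), "duration_s": 240},
    {"job_type": "Build Index", "status": "Failed",
     "created_at": datetime(2024, 12, 11, 7, 0), "duration_s": 90},
]

# Date Range: default to the last 7 days.
since = datetime(2024, 12, 12) - timedelta(days=7)
recent = [log for log in logs if log["created_at"] >= since]

# Status: focus on Failed jobs only.
failed = [log for log in recent if log["status"] == "Failed"]

# Sorting: clicking a column header sorts by it; here, created_at descending.
failed.sort(key=lambda log: log["created_at"], reverse=True)
print(failed)
```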

Viewing job details

Click on any log entry to see detailed information about that job, including:

  • Complete job metadata
  • Specific timestamps
  • Job-specific details (varies by job type)
  • For integrations: Provider and pipeline type information

Pro Tip: When working with support, you can share the exact job you're looking at by copying the URL from the job details view.

Job types explained

  • History Activation: Exports historical event data from datasets for use in outbound integrations like Facebook CAPI.
  • Build Index: Creates searchable indexes that power segmentation and analytical capabilities.
  • Dataflow: Processes and enriches data to create standardized customer attributes and a single customer view.
  • Integration: Retrieves data automatically from integrated platforms like Shopify, Klaviyo, or other connected systems.
  • Dataset Load Job: Imports the received data into a designated dataset within Lexer.
  • File Upload: Processes data files uploaded either manually through the interface or programmatically via API.
  • Identity Resolution: Matches and merges customer records across datasets to create unified customer profiles.
  • Delete Dataset: Removes a dataset and its associated data from the system.

Health checking your data

Regular monitoring of your Logs helps ensure your data is processing correctly. Here are key areas to monitor:

Daily Data Loads

  • Check that scheduled integrations are running successfully
  • Verify expected data volumes are being processed
  • Confirm processing times are within normal ranges
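
If you want to automate these checks against exported log data, a simple script might look like the sketch below. The expected sources, duration threshold, and field names are all assumptions you'd adjust for your own account.

```python
# A sketch of a daily health check over exported log rows: did each scheduled
# integration succeed today, and did it run within its usual time?
# EXPECTED_SOURCES, NORMAL_MAX_DURATION, and field names are illustrative.
from datetime import datetime, timedelta

EXPECTED_SOURCES = {"Shopify", "Klaviyo"}    # integrations you expect daily
NORMAL_MAX_DURATION = timedelta(minutes=30)  # tune to your usual run times

todays_jobs = [
    {"source": "Shopify", "status": "Success",
     "created_at": datetime(2024, 12, 12, 6, 0), "duration": timedelta(minutes=5)},
]

# Check that every expected integration ran successfully today.
seen = {j["source"] for j in todays_jobs if j["status"] == "Success"}
for missing in EXPECTED_SOURCES - seen:
    print(f"ALERT: no successful {missing} job today")

# Flag jobs that took longer than normal.
for j in todays_jobs:
    if j["duration"] > NORMAL_MAX_DURATION:
        print(f"WARN: {j['source']} took {j['duration']}, longer than normal")
```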

Investigating Issues

When you notice potential issues, you can:

  1. Use date filters to identify when the problem started
  2. Filter by job type to isolate specific parts of the process
  3. Look for any failed jobs in the sequence
  4. Check job durations for unusual processing times
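
Step 1 in particular lends itself to scripting: given a set of log records, you can pinpoint when failures began. The data shape below is an illustrative assumption.

```python
# A sketch of step 1 above: scan jobs in date order to find the first failure.
# The record shape is illustrative.
from datetime import datetime

jobs = [
    {"name": "dataflow-nightly", "status": "Success", "created_at": datetime(2024, 12, 9)},
    {"name": "dataflow-nightly", "status": "Failed",  "created_at": datetime(2024, 12, 10)},
    {"name": "dataflow-nightly", "status": "Failed",  "created_at": datetime(2024, 12, 11)},
]

failures = sorted(
    (j for j in jobs if j["status"] == "Failed"),
    key=lambda j: j["created_at"],
)
if failures:
    first = failures[0]
    print(f"First failure: {first['name']} at {first['created_at']:%Y-%m-%d}")
```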

Working with Support

Need help investigating an issue?

  • Copy the job's URL directly from your browser
  • Share this URL with your Success Manager or Support team
  • This gives them immediate access to the specific job details, speeding up investigation and resolution

Summary

The Logs interface is your central tool for monitoring data health in Lexer. From tracking data processing to troubleshooting issues, understanding how to use Logs effectively ensures your customer data stays accurate and up-to-date. Remember these key points:

  • Monitor your data processing through the six stages: from upload through to build index
  • Use filters to quickly find specific jobs or investigate issues
  • Share job URLs with Support for faster problem resolution
  • Make regular health checks part of your data management routine
Updated: December 12, 2024