Philip Arkcoll
August 19, 2025

Building a Manager-Effectiveness Score in Google BigQuery: A Reproducible SQL Playbook

Introduction

Manager effectiveness drives organizational success, yet most companies rely on lagging indicators like employee satisfaction surveys to measure leadership performance. (Manager Effectiveness: 5 Metrics That Matter More than eSat Scores) The shift toward real-time, data-driven metrics is no longer optional—it's essential for competitive advantage. (How to Measure Leadership Performance)

This technical playbook demonstrates how to build a comprehensive manager-effectiveness score using Google BigQuery, incorporating four core metrics: 1:1 frequency, coaching minutes, AI-prompt usage, and network centrality. By leveraging anonymized calendar and collaboration metadata with privacy-safe hashing techniques, data teams can create a composite score that updates daily and provides actionable insights into leadership performance.


The Four Pillars of Manager Effectiveness

Effective managers balance hard and soft skills, including strategic planning, decision-making, communication, empathy, and adaptability. (How to Measure Leadership Performance) Research shows that top-performing managers share five key behaviors: they provide regular coaching, define reasonable team norms, support without micromanaging, elevate teams through organizational connections, and engage routinely with team members. (Manager Effectiveness: 5 Metrics That Matter More than eSat Scores)

Our BigQuery implementation focuses on four measurable dimensions:

1. One-on-One Meeting Frequency

Manager 1:1 frequency and cancellation rates serve as key indicators of manager effectiveness. (Manager Effectiveness: 5 Metrics That Matter More than eSat Scores) The number of scheduled touchpoints with managers correlates highly with positive survey outcomes and reduced attrition risk.

2. Coaching Minutes

Time spent in coaching conversations reflects a manager's investment in team development. This metric captures both formal coaching sessions and informal mentoring moments embedded within regular meetings.

3. AI Tool Adoption

Modern managers who embrace AI tools demonstrate adaptability and forward-thinking leadership. (Workplace HR Data Integrations) Tracking AI-prompt usage across tools like ChatGPT, GitHub Copilot, and Microsoft Copilot provides insights into a manager's willingness to leverage technology for team productivity.

4. Network Centrality

Effective managers elevate their teams by connecting them to other parts of the organization. Network centrality measures a manager's position within the broader organizational communication graph, indicating their ability to unblock and support their teams.


Data Architecture and Schema Design

Core Tables Structure

Our BigQuery implementation requires four primary tables to capture the necessary data points:

Table Name          Purpose                  Key Fields
calendar_events     Meeting metadata         event_id, organizer_hash, attendee_hashes, start_time, duration_minutes, event_type, status, title
collaboration_data  Communication patterns   sender_hash, recipient_hashes, timestamp, interaction_type
ai_usage_logs       AI tool interactions     user_hash, tool_name, prompt_count, session_duration, timestamp
org_hierarchy       Reporting relationships  manager_hash, direct_report_hash, department, level

Privacy-Safe Data Ingestion

All personal identifiers must be hashed using SHA-256 with a consistent salt, which removes direct identifiers while preserving join keys for analysis. Strictly speaking, salted hashing is pseudonymization rather than full anonymization under GDPR and CCPA, so the hashed tables still require access controls; the ingestion process should apply the hashing at the source.

-- Example hashing function for consistent anonymization.
-- In production, supply the salt from a secret store (e.g. Secret Manager)
-- rather than hardcoding it in the function body.
CREATE OR REPLACE FUNCTION `project.dataset.hash_email`(email STRING)
RETURNS STRING
AS (
  TO_HEX(SHA256(CONCAT(email, 'your_salt_here')))
);
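If some pipelines hash identifiers before the data ever reaches BigQuery, the transformation must match the UDF byte for byte or joins will silently break. A minimal Python sketch of the same salted SHA-256 (the salt value here is a placeholder, as in the UDF above):

```python
import hashlib

def hash_email(email: str, salt: str) -> str:
    # Mirrors the BigQuery UDF: TO_HEX(SHA256(CONCAT(email, salt)))
    return hashlib.sha256((email + salt).encode("utf-8")).hexdigest()

# Hashing is deterministic, so the same address always maps to the same
# token and joins across tables still work after anonymization.
token = hash_email("jane.doe@example.com", "your_salt_here")
assert len(token) == 64  # hex-encoded 256-bit digest
```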

Metric 1: One-on-One Meeting Frequency

SQL Implementation

The 1:1 frequency metric calculates the average time between one-on-one meetings for each manager-direct report pair:

CREATE OR REPLACE TABLE `project.dataset.one_on_one_frequency` AS
WITH manager_meetings AS (
  SELECT 
    ce.organizer_hash as manager_hash,
    attendee_hash as direct_report_hash,
    DATE(ce.start_time) as meeting_date,
    ce.duration_minutes
  FROM `project.dataset.calendar_events` ce
  CROSS JOIN UNNEST(ce.attendee_hashes) as attendee_hash
  JOIN `project.dataset.org_hierarchy` oh 
    ON ce.organizer_hash = oh.manager_hash 
    AND attendee_hash = oh.direct_report_hash
  WHERE ce.event_type = 'one_on_one'
    AND ce.status = 'confirmed'
    AND DATE(ce.start_time) >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
),
meeting_intervals AS (
  SELECT 
    manager_hash,
    direct_report_hash,
    meeting_date,
    LAG(meeting_date) OVER (
      PARTITION BY manager_hash, direct_report_hash 
      ORDER BY meeting_date
    ) as prev_meeting_date,
    DATE_DIFF(
      meeting_date, 
      LAG(meeting_date) OVER (
        PARTITION BY manager_hash, direct_report_hash 
        ORDER BY meeting_date
      ), 
      DAY
    ) as days_between_meetings
  FROM manager_meetings
)
SELECT 
  manager_hash,
  direct_report_hash,
  AVG(days_between_meetings) as avg_days_between_meetings,
  COUNT(*) as total_meetings_90d,
  -- Score: Lower days between meetings = higher score
  CASE 
    WHEN AVG(days_between_meetings) <= 7 THEN 100
    WHEN AVG(days_between_meetings) <= 14 THEN 80
    WHEN AVG(days_between_meetings) <= 21 THEN 60
    WHEN AVG(days_between_meetings) <= 30 THEN 40
    ELSE 20
  END as one_on_one_score
FROM meeting_intervals
WHERE days_between_meetings IS NOT NULL
GROUP BY manager_hash, direct_report_hash;
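The banding in the final CASE expression is easier to inspect and tune in isolation; here is the same threshold logic as a small Python function:

```python
def one_on_one_score(avg_days_between_meetings: float) -> int:
    # Same bands as the CASE expression above: weekly 1:1s score 100,
    # and the score steps down as the average gap widens.
    if avg_days_between_meetings <= 7:
        return 100
    if avg_days_between_meetings <= 14:
        return 80
    if avg_days_between_meetings <= 21:
        return 60
    if avg_days_between_meetings <= 30:
        return 40
    return 20

assert one_on_one_score(6.5) == 100   # weekly cadence
assert one_on_one_score(10) == 80     # roughly biweekly
assert one_on_one_score(45) == 20     # rare 1:1s
```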

Cancellation Rate Analysis

Frequent rescheduling or skipping of 1:1s can erode employee connectedness and increase attrition risk. (Manager Effectiveness: 5 Metrics That Matter More than eSat Scores) The cancellation rate provides additional context to the frequency metric:

CREATE OR REPLACE TABLE `project.dataset.meeting_cancellation_rates` AS
SELECT 
  organizer_hash as manager_hash,
  COUNT(*) as total_scheduled_meetings,
  SUM(CASE WHEN status = 'cancelled' THEN 1 ELSE 0 END) as cancelled_meetings,
  SAFE_DIVIDE(
    SUM(CASE WHEN status = 'cancelled' THEN 1 ELSE 0 END),
    COUNT(*)
  ) * 100 as cancellation_rate_pct,
  -- Score: Lower cancellation rate = higher score
  CASE 
    WHEN SAFE_DIVIDE(SUM(CASE WHEN status = 'cancelled' THEN 1 ELSE 0 END), COUNT(*)) <= 0.05 THEN 100
    WHEN SAFE_DIVIDE(SUM(CASE WHEN status = 'cancelled' THEN 1 ELSE 0 END), COUNT(*)) <= 0.10 THEN 80
    WHEN SAFE_DIVIDE(SUM(CASE WHEN status = 'cancelled' THEN 1 ELSE 0 END), COUNT(*)) <= 0.15 THEN 60
    WHEN SAFE_DIVIDE(SUM(CASE WHEN status = 'cancelled' THEN 1 ELSE 0 END), COUNT(*)) <= 0.20 THEN 40
    ELSE 20
  END as cancellation_score
FROM `project.dataset.calendar_events`
WHERE event_type = 'one_on_one'
  AND DATE(start_time) >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
GROUP BY organizer_hash;

Metric 2: Coaching Minutes

Identifying Coaching Sessions

Coaching minutes encompass both dedicated coaching sessions and coaching moments within regular meetings. The query below identifies coaching-related content by keyword matching on meeting titles, a lightweight stand-in for full natural-language classification of titles and descriptions:

CREATE OR REPLACE TABLE `project.dataset.coaching_minutes` AS
WITH coaching_sessions AS (
  SELECT 
    ce.organizer_hash as manager_hash,
    attendee_hash as direct_report_hash,
    ce.duration_minutes,
    DATE(ce.start_time) as session_date,
    CASE 
      WHEN REGEXP_CONTAINS(LOWER(ce.title), r'coach|mentor|feedback|development|career|growth|skill') THEN 'explicit_coaching'
      WHEN ce.event_type = 'one_on_one' AND ce.duration_minutes >= 30 THEN 'potential_coaching'
      ELSE 'other'
    END as coaching_type
  FROM `project.dataset.calendar_events` ce
  CROSS JOIN UNNEST(ce.attendee_hashes) as attendee_hash
  JOIN `project.dataset.org_hierarchy` oh 
    ON ce.organizer_hash = oh.manager_hash 
    AND attendee_hash = oh.direct_report_hash
  WHERE ce.status = 'confirmed'
    AND DATE(ce.start_time) >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
    AND ARRAY_LENGTH(ce.attendee_hashes) <= 3  -- Small group sessions
)
SELECT 
  manager_hash,
  SUM(CASE WHEN coaching_type = 'explicit_coaching' THEN duration_minutes ELSE 0 END) as explicit_coaching_minutes,
  -- Heuristic: assume roughly 30% of a longer 1:1 is informal coaching
  SUM(CASE WHEN coaching_type = 'potential_coaching' THEN duration_minutes * 0.3 ELSE 0 END) as estimated_coaching_minutes,
  SUM(CASE WHEN coaching_type = 'explicit_coaching' THEN duration_minutes ELSE 0 END) + 
  SUM(CASE WHEN coaching_type = 'potential_coaching' THEN duration_minutes * 0.3 ELSE 0 END) as total_coaching_minutes,
  COUNT(DISTINCT direct_report_hash) as direct_reports_coached,
  -- Score based on coaching minutes per direct report
  CASE 
    WHEN SAFE_DIVIDE(
      SUM(CASE WHEN coaching_type = 'explicit_coaching' THEN duration_minutes ELSE 0 END) + 
      SUM(CASE WHEN coaching_type = 'potential_coaching' THEN duration_minutes * 0.3 ELSE 0 END),
      COUNT(DISTINCT direct_report_hash)
    ) >= 120 THEN 100  -- 2+ hours per direct report per quarter
    WHEN SAFE_DIVIDE(
      SUM(CASE WHEN coaching_type = 'explicit_coaching' THEN duration_minutes ELSE 0 END) + 
      SUM(CASE WHEN coaching_type = 'potential_coaching' THEN duration_minutes * 0.3 ELSE 0 END),
      COUNT(DISTINCT direct_report_hash)
    ) >= 90 THEN 80
    WHEN SAFE_DIVIDE(
      SUM(CASE WHEN coaching_type = 'explicit_coaching' THEN duration_minutes ELSE 0 END) + 
      SUM(CASE WHEN coaching_type = 'potential_coaching' THEN duration_minutes * 0.3 ELSE 0 END),
      COUNT(DISTINCT direct_report_hash)
    ) >= 60 THEN 60
    WHEN SAFE_DIVIDE(
      SUM(CASE WHEN coaching_type = 'explicit_coaching' THEN duration_minutes ELSE 0 END) + 
      SUM(CASE WHEN coaching_type = 'potential_coaching' THEN duration_minutes * 0.3 ELSE 0 END),
      COUNT(DISTINCT direct_report_hash)
    ) >= 30 THEN 40
    ELSE 20
  END as coaching_score
FROM coaching_sessions
GROUP BY manager_hash;
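As a sanity check on the discounting and banding logic, the per-report calculation can be sketched in Python (the 0.3 factor is the same heuristic the query applies to long 1:1s):

```python
def coaching_score(explicit_minutes: float, potential_minutes: float,
                   direct_reports: int) -> int:
    # Explicit coaching counts in full; "potential" coaching (long 1:1s)
    # is discounted to 30%, matching the heuristic in the SQL above.
    total = explicit_minutes + 0.3 * potential_minutes
    per_report = total / direct_reports
    if per_report >= 120:   # 2+ hours per direct report per quarter
        return 100
    if per_report >= 90:
        return 80
    if per_report >= 60:
        return 60
    if per_report >= 30:
        return 40
    return 20

# 300 explicit + 30% of 400 potential = 420 minutes over 4 reports
# = 105 minutes per report, which lands in the 90-119 band.
assert coaching_score(300, 400, 4) == 80
```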

Metric 3: AI Tool Adoption

Tracking AI Usage Patterns

Modern workplace analytics platforms integrate with various AI tools to track usage patterns. (Workplace HR Data Integrations) This metric measures a manager's adoption of AI tools as an indicator of their adaptability and forward-thinking approach:

CREATE OR REPLACE TABLE `project.dataset.ai_adoption_scores` AS
WITH ai_usage_summary AS (
  SELECT 
    user_hash as manager_hash,
    tool_name,
    SUM(prompt_count) as total_prompts,
    SUM(session_duration) as total_session_minutes,
    COUNT(DISTINCT DATE(timestamp)) as active_days
  FROM `project.dataset.ai_usage_logs`
  WHERE DATE(timestamp) >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
    AND user_hash IN (SELECT DISTINCT manager_hash FROM `project.dataset.org_hierarchy`)
  GROUP BY user_hash, tool_name
),
manager_ai_metrics AS (
  SELECT 
    manager_hash,
    COUNT(DISTINCT tool_name) as unique_tools_used,
    SUM(total_prompts) as total_prompts_90d,
    SUM(total_session_minutes) as total_ai_minutes_90d,
    MAX(active_days) as max_active_days_per_tool,
    AVG(active_days) as avg_active_days_per_tool
  FROM ai_usage_summary
  GROUP BY manager_hash
)
SELECT 
  manager_hash,
  unique_tools_used,
  total_prompts_90d,
  total_ai_minutes_90d,
  avg_active_days_per_tool,
  -- Composite AI adoption score
  LEAST(100, 
    (unique_tools_used * 20) +  -- Up to 5 tools = 100 points
    (LEAST(total_prompts_90d / 10, 30)) +  -- Up to 300 prompts = 30 points
    (LEAST(avg_active_days_per_tool * 2, 20))  -- Up to 10 avg active days = 20 points
  ) as ai_adoption_score
FROM manager_ai_metrics;
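The composite formula rewards breadth of tools most heavily, with capped contributions from prompt volume and regularity. The same arithmetic in Python, useful for checking how the caps interact before changing the weights:

```python
def ai_adoption_score(unique_tools: int, prompts_90d: int,
                      avg_active_days: float) -> float:
    # Same formula as the query: tool breadth dominates, while prompt
    # volume and active-day regularity are capped at 30 and 20 points.
    return min(100,
               unique_tools * 20
               + min(prompts_90d / 10, 30)
               + min(avg_active_days * 2, 20))

# 3 tools (60) + 150 prompts (15) + 8 avg active days (16) = 91
assert ai_adoption_score(3, 150, 8) == 91
```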

Metric 4: Network Centrality

Calculating Organizational Influence

Network centrality measures a manager's position within the organizational communication graph, indicating their ability to connect and elevate their teams. (Manager Effectiveness: 5 Metrics That Matter More than eSat Scores) This metric captures cross-functional collaboration and organizational influence:

CREATE OR REPLACE TABLE `project.dataset.network_centrality` AS
WITH communication_graph AS (
  SELECT 
    sender_hash,
    recipient_hash,
    COUNT(*) as interaction_count,
    COUNT(DISTINCT DATE(timestamp)) as interaction_days
  FROM `project.dataset.collaboration_data`
  CROSS JOIN UNNEST(recipient_hashes) as recipient_hash
  WHERE DATE(timestamp) >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
    AND sender_hash != recipient_hash
  GROUP BY sender_hash, recipient_hash
),
-- org_hierarchy has one row per manager/report pair, so joining it
-- directly would fan out and inflate the sums below; build a
-- deduplicated person dimension first
people AS (
  SELECT DISTINCT manager_hash AS person_hash, department, level
  FROM `project.dataset.org_hierarchy`
  UNION DISTINCT
  SELECT DISTINCT direct_report_hash, department, level
  FROM `project.dataset.org_hierarchy`
),
manager_connections AS (
  SELECT 
    cg.sender_hash as manager_hash,
    COUNT(DISTINCT cg.recipient_hash) as unique_connections,
    COUNT(DISTINCT p.department) as cross_dept_connections,
    COUNT(DISTINCT p.level) as cross_level_connections,
    SUM(cg.interaction_count) as total_interactions,
    AVG(cg.interaction_count) as avg_interactions_per_connection
  FROM communication_graph cg
  JOIN (SELECT DISTINCT manager_hash FROM `project.dataset.org_hierarchy`) m
    ON cg.sender_hash = m.manager_hash
  LEFT JOIN people p
    ON cg.recipient_hash = p.person_hash
  GROUP BY cg.sender_hash
)
SELECT 
  manager_hash,
  unique_connections,
  cross_dept_connections,
  cross_level_connections,
  total_interactions,
  avg_interactions_per_connection,
  -- Network centrality score based on breadth and depth of connections
  LEAST(100,
    (LEAST(unique_connections / 2, 25)) +  -- Up to 50 connections = 25 points
    (cross_dept_connections * 10) +  -- Cross-department connections = 10 points each
    (cross_level_connections * 5) +  -- Cross-level connections = 5 points each
    (LEAST(avg_interactions_per_connection / 5, 20))  -- Interaction depth = up to 20 points
  ) as network_centrality_score
FROM manager_connections;
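The centrality score again combines capped and uncapped terms, so the cross-department and cross-level bonuses can dominate for well-connected managers. The same formula in Python:

```python
def network_centrality_score(unique_connections: int,
                             cross_dept: int, cross_level: int,
                             avg_interactions: float) -> float:
    # Breadth (capped at 25), bonuses for reaching across departments
    # and levels, and a capped depth term, matching the query above.
    return min(100,
               min(unique_connections / 2, 25)
               + cross_dept * 10
               + cross_level * 5
               + min(avg_interactions / 5, 20))

# 40 connections (20) + 4 depts (40) + 3 levels (15) + depth 25 (5) = 80
assert network_centrality_score(40, 4, 3, 25) == 80
```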

Composite Manager Effectiveness Score

Weighted Scoring Model

The final manager effectiveness score combines all four metrics using a weighted approach that reflects their relative importance:

CREATE OR REPLACE TABLE `project.dataset.manager_effectiveness_daily` AS
WITH manager_scores AS (
  SELECT 
    oh.manager_hash,
    oh.department,
    oh.level,
    COUNT(DISTINCT oh.direct_report_hash) as team_size,
    
    -- Metric 1: 1:1 Frequency (30% weight)
    COALESCE(AVG(oof.one_on_one_score), 0) as avg_one_on_one_score,
    COALESCE(mcr.cancellation_score, 100) as cancellation_score,
    (COALESCE(AVG(oof.one_on_one_score), 0) * 0.7 + COALESCE(mcr.cancellation_score, 100) * 0.3) as one_on_one_composite,
    
    -- Metric 2: Coaching Minutes (25% weight)
    COALESCE(cm.coaching_score, 0) as coaching_score,
    
    -- Metric 3: AI Adoption (20% weight)
    COALESCE(aas.ai_adoption_score, 0) as ai_adoption_score,
    
    -- Metric 4: Network Centrality (25% weight)
    COALESCE(nc.network_centrality_score, 0) as network_centrality_score
    
  FROM `project.dataset.org_hierarchy` oh
  LEFT JOIN `project.dataset.one_on_one_frequency` oof 
    ON oh.manager_hash = oof.manager_hash
  LEFT JOIN `project.dataset.meeting_cancellation_rates` mcr 
    ON oh.manager_hash = mcr.manager_hash
  LEFT JOIN `project.dataset.coaching_minutes` cm 
    ON oh.manager_hash = cm.manager_hash
  LEFT JOIN `project.dataset.ai_adoption_scores` aas 
    ON oh.manager_hash = aas.manager_hash
  LEFT JOIN `project.dataset.network_centrality` nc 
    ON oh.manager_hash = nc.manager_hash
  WHERE oh.manager_hash IS NOT NULL
  GROUP BY oh.manager_hash, oh.department, oh.level, mcr.cancellation_score, cm.coaching_score, aas.ai_adoption_score, nc.network_centrality_score
)
SELECT 
  manager_hash,
  department,
  level,
  team_size,
  
  -- Individual metric scores
  one_on_one_composite,
  coaching_score,
  ai_adoption_score,
  network_centrality_score,
  
  -- Weighted composite score
  manager_effectiveness_score,
  
  -- Performance tier
  CASE 
    WHEN manager_effectiveness_score >= 80 THEN 'Excellent'
    WHEN manager_effectiveness_score >= 65 THEN 'Good'
    WHEN manager_effectiveness_score >= 50 THEN 'Needs Improvement'
    ELSE 'Critical'
  END as performance_tier,
  
  CURRENT_DATE() as calculation_date
FROM (
  -- Compute the weighted score once so the tier bands can reuse it
  SELECT 
    *,
    ROUND(
      (one_on_one_composite * 0.30) +
      (coaching_score * 0.25) +
      (ai_adoption_score * 0.20) +
      (network_centrality_score * 0.25),
      1
    ) as manager_effectiveness_score
  FROM manager_scores
);
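For quick what-if analysis when tuning the weights, the weighting and tiering logic can be extracted into a few lines of Python:

```python
def effectiveness(one_on_one: float, coaching: float,
                  ai_adoption: float, centrality: float) -> tuple:
    # Weights match the query: 30/25/20/25, summing to 1.0.
    score = round(one_on_one * 0.30 + coaching * 0.25
                  + ai_adoption * 0.20 + centrality * 0.25, 1)
    if score >= 80:
        tier = "Excellent"
    elif score >= 65:
        tier = "Good"
    elif score >= 50:
        tier = "Needs Improvement"
    else:
        tier = "Critical"
    return score, tier

# 90*0.30 + 80*0.25 + 60*0.20 + 70*0.25 = 27 + 20 + 12 + 17.5 = 76.5
assert effectiveness(90, 80, 60, 70) == (76.5, "Good")
```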

dbt Implementation

Model Structure

For teams using dbt, the queries above map naturally onto a layered project: staging models that hash and clean each raw source table, one intermediate model per metric, and a final mart model that assembles the composite score. Materializing the metric models as tables and scheduling a daily run keeps manager_effectiveness_daily current without hand-run SQL.

Frequently Asked Questions

What metrics should be included in a manager-effectiveness score?

Key metrics include 1:1 frequency and cancellation rates, team engagement levels, workday intensity patterns, and collaboration network strength. These real-time indicators provide better insights than traditional lagging metrics like employee satisfaction surveys, as they measure actual managerial behaviors that drive team performance.

How does Google BigQuery help measure manager effectiveness?

BigQuery enables real-time analysis of large-scale workplace data from multiple sources like calendar systems, communication tools, and project management platforms. It allows you to create reproducible SQL queries that aggregate behavioral data into meaningful manager-effectiveness scores, providing actionable insights for leadership development.

Why are traditional eSat scores insufficient for measuring manager effectiveness?

eSat scores are lagging indicators that only capture employee sentiment after problems have already occurred. They don't provide real-time assessment of managerial behaviors like coaching frequency, team support, or connection facilitation that actually drive performance and retention.

What role does 1:1 meeting frequency play in manager effectiveness scoring?

1:1 frequency and cancellation rates are highly correlated with positive team outcomes and employee retention. Managers who consistently hold regular one-on-ones and rarely cancel them demonstrate commitment to team development, making this a critical component of any effectiveness score.

How can Google Calendar analytics improve manager effectiveness measurement?

Google Calendar analytics reveals meeting patterns, 1:1 consistency, and time allocation that directly impact team productivity. By analyzing calendar data through platforms like Worklytics, organizations can identify managers who effectively balance team meetings, focus time, and individual coaching sessions.

What data sources are needed to build a comprehensive manager-effectiveness score?

Essential data sources include calendar systems (Google Calendar, Outlook), communication platforms (Slack, Teams), project management tools (Asana, Jira), and HR systems. Integrating these sources in BigQuery allows for holistic analysis of managerial behaviors across all work touchpoints.

Sources

1. https://www.worklytics.co/blog/manager-effectiveness-5-metrics-that-matter-more-than-esat-scores
2. https://www.worklytics.co/blog/measure-leadership-performance-with-real-data
3. https://www.worklytics.co/integrations