Variance analysis: the definitive guide to explaining the numbers without the grind
Written by The Maxima Team
Every close cycle, accounting teams ask the same question: what changed, and why? Variance analysis is where accountants become storytellers. The challenge is that the story has to be audit-ready, backed by evidence, and delivered before anyone else in the company sees the financials. The math is simple. The explanations are not.
Variance analysis and flux analysis are two terms for the same discipline. Strictly speaking, variance analysis compares actuals to a budget or forecast, while flux analysis compares actuals across time periods. In practice, most controllers do both during close and refer to the entire exercise as flux. Most teams still perform this work by exporting trial balance data into a spreadsheet, manually comparing periods, scanning for large movements, and writing explanations one account at a time. For a company with 200 GL accounts across multiple entities, that process can consume one to two full days of the close window. The issue is not that variance analysis is inherently complex. It is that most teams are still doing the preparation manually. In an AI-native workflow, the system prepares the first draft of the work, and humans review what matters.
This guide covers how to set materiality thresholds that actually work, what good variance explanations look like with real numbers, the evidence auditors expect, and how to build a review queue that routes only material variances to humans instead of forcing someone to explain every account.
What to compare and how often
Not every comparison serves the same purpose. The table below outlines the most common approaches, what each one reveals, and when it is most useful.

| Comparison | What It Reveals | When It Is Most Useful |
|---|---|---|
| Actuals vs. prior month (MoM flux) | Short-term movements, close errors, missed accruals | Every monthly close |
| Actuals vs. same month prior year (YoY) | Trends with seasonality stripped out | Seasonal businesses, board reporting |
| Actuals vs. budget or forecast | Performance against plan | Management reporting, FP&A reviews |
| Actuals vs. prior quarter (QoQ) | Run-rate direction over a longer window | Quarterly reviews and forecasting |
A common gap is that many accounting teams skip balance sheet flux as part of their monthly close. Income statement flux gets the attention because revenue and expenses are what leadership asks about. But balance sheet accounts are where errors hide. A prepaid balance that grows every month without explanation, a receivable that never collects, or an accrued liability that reverses and re-accrues in the same amount month after month are all signals that something deeper is wrong. Both income statement and balance sheet accounts should be included in the standard flux process.
Which accounts matter most? Revenue, payroll, and intercompany accounts deserve tighter scrutiny because they are high visibility, high volume, or high risk. New accounts, unusual accounts, and any account that changed materially in the prior period should also get attention. The challenge is making those decisions consistently, without relying on individual judgment each month.
Setting materiality thresholds that actually work
Materiality thresholds determine which variances require investigation and which can be documented without further analysis. Get them wrong in either direction and the process breaks. Thresholds that are too tight flag dozens of immaterial accounts and consume time on explanations nobody reads. Thresholds that are too loose allow real errors and trends to slip through.
The most effective approach is a dual threshold: a dollar amount and a percentage, with review triggered when either is exceeded. For example, a threshold of $25,000 or 10% means a $30,000 variance on a $2 million revenue line still gets reviewed because it exceeds the dollar threshold, while a $5,000 variance on a $15,000 travel account also gets reviewed because it exceeds the percentage threshold. This approach ensures material movements are caught regardless of which dimension triggers them.
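As a rough sketch, the either/or trigger can be expressed in a few lines of Python. The $25,000 and 10% defaults below are the illustrative figures from the example, not a recommendation, and the zero-base handling is an assumption of this sketch:

```python
def exceeds_threshold(variance: float, base: float,
                      dollar_limit: float = 25_000.0,
                      pct_limit: float = 0.10) -> bool:
    """Flag a variance for review when EITHER the dollar threshold
    OR the percentage threshold is exceeded (dual-threshold rule)."""
    dollar_hit = abs(variance) >= dollar_limit
    # A zero prior base (e.g., a new account) is treated as a percentage
    # hit -- an assumption, since any activity there is a 100%+ change.
    pct_hit = base == 0 or abs(variance) / abs(base) >= pct_limit
    return dollar_hit or pct_hit

# $30K variance on a $2M revenue line: the dollar threshold triggers review.
assert exceeds_threshold(30_000, 2_000_000)
# $5K variance on a $15K travel account: the percentage threshold triggers it.
assert exceeds_threshold(5_000, 15_000)
# $4K variance on a $100K line: below both thresholds, auto-documented.
assert not exceeds_threshold(4_000, 100_000)
```

In practice the limits would come from a per-account-category matrix rather than function defaults, so revenue and payroll can carry tighter settings than discretionary opex.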
Not all accounts should share the same thresholds. Revenue, payroll, and intercompany accounts warrant tighter thresholds because errors in these areas carry higher risk or visibility. Discretionary operating expenses can tolerate wider thresholds because the consequences of a missed variance are lower.
| Account Category | Illustrative $ Threshold | Suggested % Threshold | Rationale |
|---|---|---|---|
| Revenue | $50,000 | 5% | High visibility, audit focus, direct P&L impact |
| Cost of Goods Sold | $50,000 | 10% | Direct margin impact, but higher natural variability |
| Payroll & Benefits | $25,000 | 5% | Large cost base, headcount-driven, easy to validate |
| Operating Expenses | $25,000 | 10% | Moderate risk, broad category |
| Intercompany | $25,000 | 5% | Elimination risk, multi-entity coordination |
| Balance Sheet (Cash, AR, AP) | $50,000 | 10% | High volume, but usually reconciled separately |
These are starting points. The right thresholds depend on company size, the controller’s risk appetite, and auditor expectations. A useful calibration method is to set internal flux thresholds slightly below the auditor’s materiality level for the relevant financial statement line. That way, explanations are already prepared when audit fieldwork begins.
Numerical thresholds are necessary but not sufficient. SEC Staff Accounting Bulletin No. 99 makes clear that exclusive reliance on quantitative benchmarks is not appropriate. A variance can be small in dollar terms and still matter if it involves a related party, masks an offsetting error, affects a covenant, or represents a meaningful trend. Strong frameworks combine quantitative triggers with qualitative judgment.
One principle that many threshold-based approaches miss is that sometimes the signal is not a variance that exceeds the threshold, but one that should have appeared and did not. Consider a quarterly tax payment that posts every three months. If it fails to post in the expected period, period-over-period comparisons may show no variance at all. The issue is only visible when expected activity is tracked against a known cadence. The best processes include an expectations layer that flags missing activity, not just abnormal movement.
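One way to sketch such an expectations layer, assuming a simple calendar-based cadence check (the function name and posting history below are hypothetical):

```python
from datetime import date

def missing_expected_activity(postings: list[date],
                              cadence_months: int,
                              as_of: date) -> bool:
    """Flag a recurring item (e.g., a quarterly tax payment) that has
    skipped its expected period. `postings` are historical posting dates;
    `cadence_months` is the known cycle (3 for quarterly)."""
    if not postings:
        return False  # no history, so no expectation to test against
    last = max(postings)
    months_elapsed = (as_of.year - last.year) * 12 + (as_of.month - last.month)
    return months_elapsed > cadence_months

# A quarterly tax payment last posted in March. By August it is overdue,
# even though period-over-period flux on the account shows no variance.
history = [date(2023, 9, 15), date(2023, 12, 15), date(2024, 3, 15)]
assert missing_expected_activity(history, cadence_months=3, as_of=date(2024, 8, 31))
assert not missing_expected_activity(history, cadence_months=3, as_of=date(2024, 6, 30))
```

A production version would track expected dollar ranges per recurring item, not just dates, but even this minimal check catches the "no variance because nothing posted" failure mode that threshold scans miss.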
Three variances you would actually see at close
Textbook variance analysis uses clean, hypothetical numbers. Real variance analysis involves messy operational context, judgment calls about whether an entry is needed, and evidence that lives outside the GL. Here are three scenarios drawn from common close situations.
Example 1: revenue decline that is not actually a decline
Revenue is $180,000 lower than prior month. The sales team says nothing changed. Pipeline looks normal. The immediate assumption is that the business is softening.
Investigation: Pulling the detail by customer reveals that the largest customer's revenue dropped $210,000 while all other customers collectively increased $30,000. Drilling into that customer, the picture becomes clear. Last month included a cumulative catch-up adjustment under ASC 606. The customer had amended their contract midway through the prior quarter, and the modification was treated as part of the existing contract under ASC 606-10-25-13(b), which required reallocating the transaction price across remaining performance obligations. That reallocation generated a $210,000 catch-up recognized entirely in the prior month. This month is the first clean month at the new contracted rate.
Both months are correct. No journal entry is needed. But without drilling into the customer-level detail, reviewing the contract amendment, and understanding the ASC 606 modification treatment, a controller would report that revenue declined. The real story is the opposite: the business is stable, and the prior month was the anomaly.
Evidence: contract amendment with effective date, ASC 606 modification analysis showing reallocation of transaction price, revenue recognition schedule showing the catch-up entry and the forward run-rate, customer-level revenue detail for both periods.
The narrative: "Revenue decreased $180K MoM. Driven by $210K non-recurrence of an ASC 606 cumulative catch-up adjustment recognized in the prior month for [Customer] contract modification (ASC 606-10-25-13(b)). Current month reflects the new contracted run-rate. Underlying revenue excluding catch-up increased $30K. No adjustment required. Contract amendment and revenue schedule attached."
Example 2: opex variance from annual license reclassification
Professional services expense is $92,000 higher than prior month. Prior month: $340,000. Current month: $432,000. The variance is 27%, well above typical thresholds.
Investigation: the increase traces primarily to a single vendor. The annual software license for a project management tool renewed in the current month, and the vendor switched from monthly to annual billing without advance notice. The full annual amount ($85,000) was expensed in one month. Prior months had carried this cost at $7,083 per month, making the vendor-specific increase $77,917.
This is not an increase in spending. It is a prepayment that should be amortized over the license term. Under ASC 340-10, costs paid in advance for services to be received over future periods should be recognized as prepaid assets and amortized as the benefit is consumed.
Adjusting journal entry:
| Date | Account | Debit | Credit |
|---|---|---|---|
| Mar 31 | Prepaid Expenses (1350) | $77,917 | |
| Mar 31 | Professional Services Expense (640000) | | $77,917 |
The reclassification moves 11 months of the license ($85,000 less one month at $7,083 = $77,917) to prepaid. The remaining $7,083 stays in professional services as the current month's expense. Going forward, $7,083 amortizes monthly from prepaid to expense. After the reclassification, the MoM variance drops from $92,000 (27%) to $14,083 (4%), which falls below typical thresholds.
Evidence: vendor invoice showing annual billing, prior-year invoice showing monthly billing, contract amendment or renewal terms, amortization schedule, JE approval.
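The arithmetic behind this reclassification is easy to verify; a quick sketch using the values from the example above:

```python
# Annual license billed in a single month (from the example above).
ANNUAL_LICENSE = 85_000.0
TERM_MONTHS = 12

monthly_expense = round(ANNUAL_LICENSE / TERM_MONTHS)   # $7,083 run-rate
reclass_to_prepaid = ANNUAL_LICENSE - monthly_expense   # $77,917 to prepaid

prior_month = 340_000.0
current_month = 432_000.0
raw_variance = current_month - prior_month              # $92,000, ~27%
adjusted = current_month - reclass_to_prepaid           # expense after reclass
adjusted_variance = adjusted - prior_month              # $14,083, ~4%
```

After the entry posts, the MoM variance drops from about 27% of the prior-month base to about 4%, below the 10% opex threshold used earlier in this guide.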
Example 3: payroll variance that requires operational data
Salary expense is $47,000 lower than prior month. Prior month: $812,000. Current month: $765,000. The variance is (5.8%), just over the 5% threshold for payroll.
Investigation: the headcount report shows a net decrease of three FTEs in the finance department. Two departures occurred mid-month with no overlap from replacement hires, and one open role from the prior month remained unfilled. The $47,000 decrease is driven by partial-month vacancy across these positions.
No journal entry is needed. Replacement hiring is underway, and the run-rate is expected to increase as new employees onboard, though not necessarily to the prior level since replacement salaries may differ from departing employees.
This example illustrates why variance analysis requires operational data, not just GL data. The GL shows salary expense decreased. It does not tell you why. Answering "why" requires pulling the payroll register to see who was paid, for how many days, and what changed from last month. A controller who explains this variance without referencing the payroll detail is guessing. A controller who attaches the evidence is proving it.
Evidence: payroll register for both periods showing active employees and pay dates, headcount report showing current vs. prior month FTEs.
The narrative: "Salary expense decreased $47K (5.8%) MoM. Net decrease of 3 FTEs in the finance department due to mid-month departures and open roles not yet backfilled. Replacement hiring underway. Payroll register and headcount report attached."
The variance narrative: a template that works
The most time-consuming part of variance analysis is not finding the variance. It is writing the explanation. And the most common failure mode is vague explanations that raise more questions than they answer. "Timing differences" is not an explanation. "Normal operating fluctuations" is not an explanation. These are placeholders that get copied forward month after month until nobody remembers what they originally referred to.
A strong explanation covers six elements in a structured format:
| Field | What It Covers | Example |
|---|---|---|
| Account and period | Which account, which comparison | 640000 Professional Services, Mar vs. Prior Month |
| Variance amount | Both $ and % | $92,000 unfavorable (27%) |
| Root cause category | Volume, price, timing, one-time, error, operational | One-time: annual license billed in single month |
| Narrative | 2-3 sentences: what happened, why, and whether it recurs | Annual software license renewed at $85K. Vendor switched from monthly to annual billing. Reclassified $78K to prepaid per ASC 340-10. Remaining $7K is normal monthly run-rate. Non-recurring. |
| Evidence | What is attached | Vendor invoice, prior-year invoice, amortization schedule, JE approval |
| Owner | Who prepared and who reviewed | Prepared: [Name]. Reviewed: [Name]. |
The test is simple: a reviewer who was not involved in the investigation should be able to read the explanation, review the evidence, and sign off without asking a follow-up question. If they still ask “why did this change?”, the explanation is incomplete.
One additional field worth including: "Does this recur?" Many variances are one-time (a vendor billing change, a conference that happens once a year, a signing bonus for a new hire). Others are structural (a new contract that permanently increases a cost line, a headcount addition that raises payroll going forward). Flagging recurrence up front saves the same investigation from happening next month.
Evidence requirements by variance type
A variance explanation without evidence is an assertion. Auditors, reviewers, and SOX testing teams expect support that is specific to the type of variance being explained. The evidence for a revenue shortfall is different from the evidence for a payroll overspend.
Revenue variances: Signed contracts or order confirmations, invoices, revenue recognition schedule or waterfall, ASC 606 policy memo (if recognition timing is at issue), cutoff analysis showing delivery or service completion dates.
Headcount and payroll variances: Payroll register for both periods showing active employees, pay rates, and pay dates. Headcount report showing current vs. prior month FTEs by department.
Operating expense variances: Vendor invoices, purchase order approvals, contract amendments or renewal terms, reclassification support (if expense was moved between accounts), usage metrics from the platform or service (license utilization reports, API usage, storage consumption). For software and SaaS expenses specifically, the usage report matters as much as the invoice. A $92,000 license renewal means something different if the tool is used by 400 people versus 12.
Intercompany variances: Intercompany invoices or transaction detail from both entities showing what drove the movement between periods.
Accrual variances: Supporting calculation with assumptions, and prior period comparison showing the accrual pattern.
One-time or unusual items: Documentation that supports both what the item is and why it is non-recurring.
The common thread: evidence should arrive with the variance, not after the reviewer asks for it. Teams that write the explanation first and hunt for evidence later are the ones sending frantic emails on day 4 of close asking AP for a copy of an invoice from two months ago.
Building a review queue (instead of reviewing everything)
If your senior accountant or controller reviews every account's variance explanation personally, you have built a process that does not scale. The solution is a review queue that routes variances based on materiality, risk, and whether the explanation requires judgment. The difference between a variance process and a variance queue: one calculates change, the other manages attention.
Tier 1: Auto-document, no review required. The variance is below both the dollar and percentage thresholds. No journal entry is needed. The system (or the preparer) documents the variance amount and a standard note ("Below materiality threshold, no investigation required"). The reviewer does not need to see this.
Tier 2: Preparer documents, reviewer skims. The variance exceeds one threshold but the root cause is known and recurring. Examples: a seasonal marketing campaign that spikes every Q4, a payroll increase from annual merit raises that were budgeted at a different effective date, or a hosting cost increase from a contract renewal that was anticipated. The preparer writes the explanation and attaches evidence. The reviewer confirms the explanation is reasonable without conducting an independent investigation.
Tier 3: Full review. The variance exceeds both thresholds, OR it is the first occurrence of this variance, OR it involves a sensitive account, OR it requires a journal entry, OR it is an expected variance that did not appear. These get the controller's full attention: the explanation is reviewed, the evidence is inspected, and any journal entries are approved before sign-off.
| Routing Criterion | Tier 1 (Auto-doc) | Tier 2 (Skim) | Tier 3 (Full Review) |
|---|---|---|---|
| Below both thresholds | Yes | | |
| Exceeds one threshold, known/recurring cause | | Yes | |
| Exceeds both thresholds | | | Yes |
| First occurrence | | | Yes |
| Journal entry required | | | Yes |
| Expected variance absent | | | Yes |
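The routing logic above reduces to a short decision function. One assumption in this sketch: a single-threshold breach whose cause is *not* known and recurring escalates to Tier 3, since the text only defines the known-and-recurring case and an unexplained breach needs judgment:

```python
def route_variance(exceeds_dollar: bool, exceeds_pct: bool,
                   known_recurring: bool, first_occurrence: bool,
                   needs_je: bool, expected_absent: bool) -> int:
    """Return the review tier (1, 2, or 3) for a flagged variance."""
    # Tier 3: both thresholds breached, a first occurrence, a required
    # journal entry, or an expected variance that never appeared.
    if ((exceeds_dollar and exceeds_pct) or first_occurrence
            or needs_je or expected_absent):
        return 3
    if exceeds_dollar or exceeds_pct:
        # Known, recurring cause -> reviewer skims (Tier 2); otherwise
        # escalate (an assumption of this sketch, see lead-in).
        return 2 if known_recurring else 3
    # Below both thresholds: auto-document, no review.
    return 1

# Seasonal Q4 marketing spike: one threshold, known cause -> Tier 2.
assert route_variance(True, False, True, False, False, False) == 2
# Both thresholds breached -> Tier 3 regardless of cause.
assert route_variance(True, True, True, False, False, False) == 3
# Quiet account below both thresholds -> Tier 1.
assert route_variance(False, False, False, False, False, False) == 1
```

Encoding the routing this way is what makes the assessment consistent month over month: the same facts always land in the same tier, regardless of who prepares the flux.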
The benefit of this approach is focus. If a company has 200 accounts, a well-calibrated threshold matrix might flag 40 for Tier 2 and 15 for Tier 3. The controller reviews 15 accounts deeply instead of 200 superficially. The quality of those 15 reviews goes up. The time spent on the other 185 goes down. And the documentation trail proves that every account was assessed, even the ones below threshold.
A useful calibration check: if more than 50% of accounts are hitting Tier 3 every month, thresholds are too tight. If fewer than 5% are hitting Tier 3, thresholds may be too loose, or the business is unusually stable, or the thresholds have not been updated as the company has grown. Review thresholds quarterly, and coordinate with your auditors during planning discussions.
What changes at month-end
Variance analysis during close is harder than variance analysis at any other time, for three reasons.
First, the numbers keep moving. Late journal entries, accruals posted in the last few days of close, and reclassifications change the trial balance underneath the variance analysis. An explanation written on Monday morning may be wrong by Tuesday afternoon because someone posted a $60,000 accrual to the same account. Teams that start flux before the books are fully closed gain speed but risk rework. Teams that wait until every entry is posted lose two days of the close window.
Second, the evidence is scattered. The GL tells you what happened. It rarely tells you why. Explaining a payroll variance requires the headcount report. Explaining an OpEx spike requires the vendor invoice and sometimes a conversation with the department head who approved the purchase. Every explanation that requires cross-functional evidence adds latency to the close.
Third, the quality of explanations degrades under time pressure. A controller with eight hours to explain 200 accounts will write thorough, evidence-backed narratives for the first 30 and increasingly terse notes for the rest. The last 50 accounts get "timing" or "normal fluctuation" or last month's explanation copied forward. This is where stale flux commentary accumulates, and it is what auditors notice first.
How this works in NetSuite
For teams running on NetSuite, the variance analysis workflow tends to follow the same pattern every month. Someone runs a comparative financial report or a saved search to pull the trial balance. The data gets exported into Excel. Columns are added for dollar change, percentage change, and explanation. The accountant then filters for large movements, opens a separate tab to pull transaction detail for each flagged account, and starts writing.
The problem is not that NetSuite lacks the data. NetSuite's Financial Report Builder can produce period-over-period comparisons, and saved searches can surface transaction-level detail by account, subsidiary, department, class, or location. The data is there. The problem is that the explanation and review layer sits outside the system.
Once the export hits Excel, version control breaks down. One preparer downloads detail at 9:00 AM. Another pulls a refreshed report after a late journal entry posts at 1:00 PM. A reviewer comments on an explanation that was built on numbers that have since changed. Someone pastes a summary into a slide deck, and the audit trail gets thinner with every handoff.
The other friction point is dimensionality. A controller who sees that marketing expense increased $400K cannot stop at the account total. They need to pivot by vendor to see that one agency drove 80% of the movement. They need to pivot by department to see whether the spend was concentrated in one business unit or spread across the company. In NetSuite, that means either running multiple saved searches or building a voluminous custom report and manually pivoting and filtering through it. Either way, the process is cumbersome, and each iteration adds time and room for error.
For NetSuite teams specifically, a stronger variance process should keep the source data current rather than frozen at the point of export. It should preserve the native dimensions teams already use (subsidiary, department, location, class, vendor, customer) without requiring manual reshaping. And when a reviewer questions an explanation, the evidence should be one click away rather than buried in a spreadsheet someone saved to their desktop three days ago. Maxima's native NetSuite integration addresses this directly, running flux analysis on top of NetSuite's data model rather than alongside it in a spreadsheet.
What a better workflow looks like
What changes with agentic AI is that variance analysis no longer starts with a blank spreadsheet and a manual hunt for drivers. The work is prepared before the review begins. Instead of calculating differences, pulling transaction detail, and writing explanations from scratch, the system surfaces what actually matters. Material variances are already identified. The underlying drivers are already broken down. Draft explanations are already written with supporting evidence attached. The accountant’s role shifts from assembling the analysis to evaluating it.
This is where platforms like Maxima come in. In practice, a controller opens to a list of accounts that have already been flagged. Data syncs directly from NetSuite and other ERPs and refreshes continuously, so flux analysis can begin before the books are fully closed. Materiality thresholds are configured by report and by account, using dollar thresholds, percentage thresholds, or both. Accounts that breach those thresholds are already waiting. The rest are documented and out of the way.
Instead of reconstructing why hosting expense moved $1.34 million, the controller opens to an explanation that already names the drivers, quantifies their impact, and ties the movement to specific transactions. “Hosting expense increased $1.34 million, driven by $1.37 million in retroactive Microsoft Azure billings for April through July posted in November” is an explanation a reviewer can act on. “Hosting expense increased” is not. From there, the investigation happens inside the workflow. The controller reviews the explanation, checks the linked transactions, and decides whether the logic holds. Last month’s explanation sits alongside the current one, making it clear whether the same root cause applies or something new has changed. If needed, they can go deeper by breaking the variance down across vendor, department, location, or product. Every number traces back to the underlying transactions, with direct links into the source system.
The workflow also surfaces what thresholds alone miss. Accrual reversals without re-accruals, duplicate entries, round-number anomalies, and balances that changed after the analysis are flagged automatically. The question shifts from “what changed?” to “is this correct?” Once the explanation is reviewed, adjusted, or approved, the audit trail captures each step. Every explanation follows a consistent structure, regardless of who prepared it or how much time pressure they were under. The output is not generated text. It is prepared accounting work.
The goal is not to remove accountants from variance analysis. It is to remove the manual assembly work so they can focus on what actually requires judgment: validating the drivers, identifying real issues, and ensuring the financials tell the right story. Variance analysis is not where accounting teams should spend their time. It is where they should apply their judgment. The difference is whether the work is prepared for them or assembled by them.
Variance analysis should not start with a blank spreadsheet. See how Maxima prepares explanations, traces movements to underlying transactions, and streamlines close review.
Frequently asked questions
How do you set materiality thresholds for variance analysis?
Use a dual threshold: a dollar amount and a percentage, with investigation triggered when either is exceeded. Common starting points are $25,000 or 10% for operating accounts, with tighter thresholds for revenue, payroll, and intercompany accounts. The right thresholds depend on the size of the company, the risk appetite of the controller, and what the external auditors expect. A useful rule: set your internal thresholds slightly below the auditor's materiality level so explanations are ready before audit fieldwork begins.
How do you write a good variance explanation?
A good explanation covers six fields: what account and period, what is the dollar and percentage variance, what is the root cause, what evidence supports it, who owns it, and does the variance recur. Avoid vague language like "timing differences" or "normal fluctuations" without specifying what the timing difference is or what makes the fluctuation normal. The test: a reviewer should be able to sign off without asking a follow-up question.
What evidence is needed to support a variance explanation?
It depends on the variance type. Revenue variances need contracts, invoices, and delivery documentation. Payroll variances need payroll registers and headcount reports. OpEx variances need vendor invoices, contract terms, and usage metrics. The common thread: the evidence should be specific to the root cause, not a generic data dump. Attach it with the explanation, not after the reviewer asks for it.
What happens when a variance investigation reveals an error?
If investigation uncovers a posting error, a missed accrual, or a misclassification, a correcting journal entry is prepared and posted before the financial statements are finalized. The variance explanation should document both the error and the correction, including the JE reference. This is one of the most valuable functions of variance analysis: it serves as a detective control that catches errors before they reach external stakeholders.
Can variance analysis be done before the books are fully closed?
Yes, and teams that start early gain a significant advantage. Most accounts stabilize within the first day or two of close, and those variances can be investigated immediately. The risk is that late entries may change balances after explanations are written. The mitigation is a system that flags which accounts had post-flux balance changes, so preparers can update only the affected explanations rather than starting over.
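One way to sketch that mitigation, assuming balances are snapshotted at the moment flux analysis begins (the account names below are hypothetical):

```python
def changed_after_flux(snapshot: dict[str, float],
                       current: dict[str, float],
                       tolerance: float = 0.01) -> list[str]:
    """Return accounts whose balances moved after the flux snapshot,
    so only those explanations need to be refreshed."""
    flagged = []
    for account, balance in current.items():
        # Accounts absent from the snapshot default to zero, so new
        # activity is also flagged.
        if abs(balance - snapshot.get(account, 0.0)) > tolerance:
            flagged.append(account)
    return flagged

snapshot = {"640000 Professional Services": 432_000.0, "500000 COGS": 1_200_000.0}
current  = {"640000 Professional Services": 432_000.0, "500000 COGS": 1_260_000.0}
# A $60K late accrual posted to COGS after explanations were written:
# only that explanation needs rework.
assert changed_after_flux(snapshot, current) == ["500000 COGS"]
```

The point is targeted rework: rather than re-running flux on 200 accounts after late entries post, the preparer revisits only the handful whose numbers actually moved.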
Every close cycle, accounting teams ask the same question: what changed, and why? Variance analysis is where accountants become storytellers. The challenge is that the story has to be audit-ready, backed by evidence, and delivered before anyone else in the company sees the financials. The math is simple. The explanations are not.
Variance analysis and flux analysis are two terms for the same discipline. Strictly speaking, variance analysis compares actuals to a budget or forecast, while flux analysis compares actuals across time periods. In practice, most controllers do both during close and refer to the entire exercise as flux. Most teams still perform this work by exporting trial balance data into a spreadsheet, manually comparing periods, scanning for large movements, and writing explanations one account at a time. For a company with 200 GL accounts across multiple entities, that process can consume one to two full days of the close window. The issue is not that variance analysis is inherently complex. It is that most teams are still doing the preparation manually. In an AI-native workflow, the system prepares the first draft of the work, and humans review what matters.
This guide covers how to set materiality thresholds that actually work, what good variance explanations look like with real numbers, the evidence auditors expect, and how to build a review queue that routes only material variances to humans instead of forcing someone to explain every account.
What to compare and how often
Not every comparison serves the same purpose. The table below outlines the most common approaches, what each one reveals, and when it is most useful.
A common gap is that many accounting teams skip balance sheet flux as part of their monthly close. Income statement flux gets the attention because revenue and expenses are what leadership asks about. But balance sheet accounts are where errors hide. A prepaid balance that grows every month without explanation, a receivable that never collects, or an accrued liability that reverses and re-accrues in the same amount month after month are all signals that something deeper is wrong. Both income statement and balance sheet accounts should be included in the standard flux process.
Which accounts matter most? Revenue, payroll, and intercompany accounts deserve tighter scrutiny because they are high visibility, high volume, or high risk. New accounts, unusual accounts, and any account that changed materially in the prior period should also get attention. The challenge is making those decisions consistently, without relying on individual judgment each month.
Setting materiality thresholds that actually work
Materiality thresholds determine which variances require investigation and which can be documented without further analysis. Get them wrong in either direction and the process breaks. Thresholds that are too tight flag dozens of immaterial accounts and consume time on explanations nobody reads. Thresholds that are too loose allow real errors and trends to slip through.
The most effective approach is a dual threshold: a dollar amount and a percentage, with review triggered when either is exceeded. For example, a threshold of $25,000 or 10% means a $30,000 variance on a $2 million revenue line still gets reviewed because it exceeds the dollar threshold, while a $5,000 variance on a $15,000 travel account also gets reviewed because it exceeds the percentage threshold. This approach ensures material movements are caught regardless of which dimension triggers them.
Not all accounts should share the same thresholds. Revenue, payroll, and intercompany accounts warrant tighter thresholds because errors in these areas carry higher risk or visibility. Discretionary operating expenses can tolerate wider thresholds because the consequences of a missed variance are lower.
Account Category | Illustrative $ Threshold | Suggested % Threshold | Rationale |
|---|---|---|---|
Revenue | $50,000 | 5% | High visibility, audit focus, direct P&L impact |
Cost of Goods Sold | $50,000 | 10% | Direct margin impact, but higher natural variability |
Payroll & Benefits | $25,000 | 5% | Large cost base, headcount-driven, easy to validate |
Operating Expenses | $25,000 | 10% | Moderate risk, broad category |
Intercompany | $25,000 | 5% | Elimination risk, multi-entity coordination |
Balance Sheet (Cash, AR, AP) | $50,000 | 10% | High-volume, but usually reconciled separately |
These are starting points. The right thresholds depend on company size, the controller’s risk appetite, and auditor expectations. A useful calibration method is to set internal flux thresholds slightly below the auditor’s materiality level for the relevant financial statement line. That way, explanations are already prepared when audit fieldwork begins.
Numerical thresholds are necessary but not sufficient. SEC Staff Accounting Bulletin No. 99 makes clear that exclusive reliance on quantitative benchmarks is not appropriate. A variance can be small in dollar terms and still matter if it involves a related party, masks an offsetting error, affects a covenant, or represents a meaningful trend. Strong frameworks combine quantitative triggers with qualitative judgment.
One principle that many threshold-based approaches miss is that sometimes the signal is not a variance that exceeds the threshold, but one that should have appeared and did not. Consider a quarterly tax payment that posts every three months. If it fails to post in the expected period, period-over-period comparisons may show no variance at all. The issue is only visible when expected activity is tracked against a known cadence. The best processes include an expectations layer that flags missing activity, not just abnormal movement.
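A minimal sketch of that expectations layer, assuming a hand-maintained schedule of recurring items. The accounts, descriptions, and cadences below are hypothetical examples, not a real chart of accounts.

```python
# Sketch of an "expectations layer": flag activity that SHOULD have posted in
# the period but did not, rather than only abnormal movement. The schedule of
# recurring items below is hypothetical.

from datetime import date

# (account, description, calendar months the item is expected to post in)
EXPECTED_ACTIVITY = [
    ("2310 Taxes Payable", "Quarterly estimated tax payment", {3, 6, 9, 12}),
    ("6400 Prof. Services", "Annual audit fee accrual", {12}),
]

def missing_activity(period: date, posted_accounts: set) -> list:
    """Return expected items that did not post in the given period."""
    missing = []
    for account, desc, months in EXPECTED_ACTIVITY:
        if period.month in months and account not in posted_accounts:
            missing.append(f"{account}: {desc} expected in {period:%b} but not posted")
    return missing
```

A quarterly tax payment that fails to post in March produces a flag even though the period-over-period variance on the account may be zero.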
Three variances you would actually see at close
Textbook variance analysis uses clean, hypothetical numbers. Real variance analysis involves messy operational context, judgment calls about whether an entry is needed, and evidence that lives outside the GL. Here are three scenarios drawn from common close situations.
Example 1: revenue decline that is not actually a decline
Revenue is $180,000 lower than prior month. The sales team says nothing changed. Pipeline looks normal. The immediate assumption is that the business is softening.
Investigation: Pulling the detail by customer reveals that the largest customer's revenue dropped $210,000 while all other customers collectively increased $30,000. Drilling into that customer, the picture becomes clear. Last month included a cumulative catch-up adjustment under ASC 606. The customer had amended their contract midway through the prior quarter, and the modification was treated as part of the existing contract under ASC 606-10-25-13(b), which required reallocating the transaction price across remaining performance obligations. That reallocation generated a $210,000 catch-up recognized entirely in the prior month. This month is the first clean month at the new contracted rate.
Both months are correct. No journal entry is needed. But without drilling into the customer-level detail, reviewing the contract amendment, and understanding the ASC 606 modification treatment, a controller would report that revenue declined. The real story is the opposite: the business is stable, and the prior month was the anomaly.
Evidence: contract amendment with effective date, ASC 606 modification analysis showing reallocation of transaction price, revenue recognition schedule showing the catch-up entry and the forward run-rate, customer-level revenue detail for both periods.
The narrative: "Revenue decreased $180K MoM. Driven by $210K non-recurrence of an ASC 606 cumulative catch-up adjustment recognized in the prior month for [Customer] contract modification (ASC 606-10-25-13(b)). Current month reflects the new contracted run-rate. Underlying revenue excluding catch-up increased $30K. No adjustment required. Contract amendment and revenue schedule attached."
Example 2: opex variance from annual license reclassification
Professional services expense is $92,000 higher than prior month. Prior month: $340,000. Current month: $432,000. The variance is 27%, well above typical thresholds.
Investigation: the increase traces primarily to a single vendor. The annual software license for a project management tool renewed in the current month, and the vendor switched from monthly to annual billing without advance notice. The full annual amount ($85,000) was expensed in one month. Prior months had carried this cost at $7,083 per month, making the vendor-specific increase $77,917.
This is not an increase in spending. It is a prepayment that should be amortized over the license term. Under ASC 340-10, costs paid in advance for services to be received over future periods should be recognized as prepaid assets and amortized as the benefit is consumed.
Adjusting journal entry:
| Date | Account | Debit | Credit |
|---|---|---|---|
| Mar 31 | Prepaid Expenses (1350) | $77,917 | |
| Mar 31 | Professional Services Expense (640000) | | $77,917 |
The reclassification moves 11 months of the license ($85,000 less one month at $7,083 = $77,917) to prepaid. The remaining $7,083 stays in professional services as the current month's expense. Going forward, $7,083 amortizes monthly from prepaid to expense. After the reclassification, the MoM variance drops from $92,000 (27%) to $14,083 (4%), which falls below typical thresholds.
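The arithmetic behind the reclassification can be checked mechanically. This sketch uses the figures from the example above and notes one practical wrinkle: a rounded monthly amount leaves a few dollars for the final amortization month.

```python
# Arithmetic behind the prepaid reclassification: an $85,000 annual license,
# one month consumed, eleven months moved to prepaid and amortized forward.

from decimal import Decimal, ROUND_HALF_UP

ANNUAL_LICENSE = Decimal("85000")
monthly = (ANNUAL_LICENSE / 12).quantize(Decimal("1"), rounding=ROUND_HALF_UP)  # $7,083
reclass_to_prepaid = ANNUAL_LICENSE - monthly                                   # $77,917

# Forward amortization: $7,083/month relieves prepaid for ten months; the
# eleventh month absorbs the rounding difference so prepaid fully clears.
schedule = [monthly] * 10 + [reclass_to_prepaid - monthly * 10]
```

The schedule sums exactly to the prepaid balance, which is the check an auditor will perform against the amortization support.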
Evidence: vendor invoice showing annual billing, prior-year invoice showing monthly billing, contract amendment or renewal terms, amortization schedule, JE approval.
Example 3: payroll variance that requires operational data
Salary expense is $47,000 lower than prior month. Prior month: $812,000. Current month: $765,000. The variance is (5.8%), just over the 5% threshold for payroll.
Investigation: the headcount report shows a net decrease of three FTEs in the finance department. Two departures occurred mid-month with no overlap from replacement hires, and one open role from the prior month remained unfilled. The $47,000 decrease is driven by partial-month vacancy across these positions.
No journal entry is needed. Replacement hiring is underway, and the run-rate is expected to increase as new employees onboard, though not necessarily to the prior level since replacement salaries may differ from departing employees.
This example illustrates why variance analysis requires operational data, not just GL data. The GL shows salary expense decreased. It does not tell you why. Answering "why" requires pulling the payroll register to see who was paid, for how many days, and what changed from last month. A controller who explains this variance without referencing the payroll detail is guessing. A controller who attaches the evidence is proving it.
Evidence: payroll register for both periods showing active employees and pay dates, headcount report showing current vs. prior month FTEs.
The narrative: "Salary expense decreased $47K (5.8%) MoM. Net decrease of 3 FTEs in the finance department due to mid-month departures and open roles not yet backfilled. Replacement hiring underway. Payroll register and headcount report attached."
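As a rough sketch of how the payroll validation works, partial-month vacancy can be prorated from the payroll register. The helper and the salaries below are hypothetical illustrations, not the actual figures behind the $47K variance.

```python
# Sketch of validating a payroll variance against headcount data: prorate
# departed employees' salaries by days worked in the month. All salary figures
# and departure dates below are hypothetical.

def partial_month_cost(monthly_salary: float, days_worked: int,
                       days_in_month: int = 30) -> float:
    """Salary actually paid for a partial month, on a simple day-count basis."""
    return monthly_salary * days_worked / days_in_month

# Expected decrease = unpaid portion of two mid-month departures plus one
# full-month vacancy (hypothetical salaries):
expected_decrease = (
    (12_000 - partial_month_cost(12_000, 14))   # departed on day 14
  + (11_000 - partial_month_cost(11_000, 12))   # departed on day 12
  + 15_000                                      # open role, vacant all month
)
```

Comparing this bottoms-up estimate to the GL movement is what turns "headcount went down" from a guess into a tie-out.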
The variance narrative: a template that works
The most time-consuming part of variance analysis is not finding the variance. It is writing the explanation. And the most common failure mode is vague explanations that raise more questions than they answer. "Timing differences" is not an explanation. "Normal operating fluctuations" is not an explanation. These are placeholders that get copied forward month after month until nobody remembers what they originally referred to.
A strong explanation covers six elements in a structured format:
| Field | What It Covers | Example |
|---|---|---|
| Account and period | Which account, which comparison | 640000 Professional Services, Mar vs. Prior Month |
| Variance amount | Both $ and % | $92,000 unfavorable (27%) |
| Root cause category | Volume, price, timing, one-time, error, operational | One-time: annual license billed in single month |
| Narrative | 2-3 sentences: what happened, why, and whether it recurs | Annual software license renewed at $85K. Vendor switched from monthly to annual billing. Reclassified $78K to prepaid per ASC 340-10. Remaining $7K is normal monthly run-rate. Non-recurring. |
| Evidence | What is attached | Vendor invoice, prior-year invoice, amortization schedule, JE approval |
| Owner | Who prepared and who reviewed | Prepared: [Name]. Reviewed: [Name]. |
The test is simple: a reviewer who was not involved in the investigation should be able to read the explanation, review the evidence, and sign off without asking a follow-up question. If they still ask “why did this change?”, the explanation is incomplete.
One additional field worth including: "Does this recur?" Many variances are one-time (a vendor billing change, a conference that happens once a year, a signing bonus for a new hire). Others are structural (a new contract that permanently increases a cost line, a headcount addition that raises payroll going forward). Flagging recurrence up front saves the same investigation from happening next month.
Evidence requirements by variance type
A variance explanation without evidence is an assertion. Auditors, reviewers, and SOX testing teams expect support that is specific to the type of variance being explained. The evidence for a revenue shortfall is different from the evidence for a payroll overspend.
Revenue variances: Signed contracts or order confirmations, invoices, revenue recognition schedule or waterfall, ASC 606 policy memo (if recognition timing is at issue), cutoff analysis showing delivery or service completion dates.
Headcount and payroll variances: Payroll register for both periods showing active employees, pay rates, and pay dates. Headcount report showing current vs. prior month FTEs by department.
Operating expense variances: Vendor invoices, purchase order approvals, contract amendments or renewal terms, reclassification support (if expense was moved between accounts), usage metrics from the platform or service (license utilization reports, API usage, storage consumption). For software and SaaS expenses specifically, the usage report matters as much as the invoice. A $92,000 license renewal means something different if the tool is used by 400 people versus 12.
Intercompany variances: Intercompany invoices or transaction detail from both entities showing what drove the movement between periods.
Accrual variances: Supporting calculation with assumptions, and prior period comparison showing the accrual pattern.
One-time or unusual items: Documentation that supports both what the item is and why it is non-recurring.
The common thread: evidence should arrive with the variance, not after the reviewer asks for it. Teams that write the explanation first and hunt for evidence later are the ones sending frantic emails on day 4 of close asking AP for a copy of an invoice from two months ago.
Building a review queue (instead of reviewing everything)
If your senior accountant or controller reviews every account's variance explanation personally, you have built a process that does not scale. The solution is a review queue that routes variances based on materiality, risk, and whether the explanation requires judgment. The difference between a variance process and a variance queue: one calculates change, the other manages attention.
Tier 1: Auto-document, no review required. The variance is below both the dollar and percentage thresholds. No journal entry is needed. The system (or the preparer) documents the variance amount and a standard note ("Below materiality threshold, no investigation required"). The reviewer does not need to see this.
Tier 2: Preparer documents, reviewer skims. The variance exceeds one threshold but the root cause is known and recurring. Examples: a seasonal marketing campaign that spikes every Q4, a payroll increase from annual merit raises that were budgeted at a different effective date, or a hosting cost increase from a contract renewal that was anticipated. The preparer writes the explanation and attaches evidence. The reviewer confirms the explanation is reasonable without conducting an independent investigation.
Tier 3: Full review. The variance exceeds both thresholds, OR it is the first occurrence of this variance, OR it involves a sensitive account, OR it requires a journal entry, OR it is an expected variance that did not appear. These get the controller's full attention: the explanation is reviewed, the evidence is inspected, and any journal entries are approved before sign-off.
| Routing Criterion | Tier 1 (Auto-doc) | Tier 2 (Skim) | Tier 3 (Full Review) |
|---|---|---|---|
| Below both thresholds | Yes | | |
| Exceeds one threshold, known/recurring cause | | Yes | |
| Exceeds both thresholds | | | Yes |
| First occurrence | | | Yes |
| Journal entry required | | | Yes |
| Expected variance absent | | | Yes |
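The routing criteria can be expressed as a small function. This is a sketch: field names are assumptions, and a single-threshold breach with an unknown cause is routed to full review as a conservative default, since the criteria above do not address that case directly.

```python
# Sketch of the three-tier routing logic described above. Field names are
# assumptions; adapt to whatever the flux system actually records.

from dataclasses import dataclass

@dataclass
class Variance:
    exceeds_dollar: bool
    exceeds_pct: bool
    known_recurring_cause: bool
    first_occurrence: bool = False
    sensitive_account: bool = False
    je_required: bool = False
    expected_but_absent: bool = False

def route(v: Variance) -> int:
    """Return review tier: 1 = auto-document, 2 = skim, 3 = full review."""
    if (v.expected_but_absent or v.first_occurrence
            or v.sensitive_account or v.je_required):
        return 3
    if v.exceeds_dollar and v.exceeds_pct:
        return 3
    if v.exceeds_dollar or v.exceeds_pct:
        # One threshold breached: skim if the cause is known and recurring,
        # otherwise escalate (conservative default).
        return 2 if v.known_recurring_cause else 3
    return 1
```

Running every account through a function like this is also what produces the documentation trail showing that below-threshold accounts were assessed, not skipped.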
The benefit of this approach is focus. If a company has 200 accounts, a well-calibrated threshold matrix might flag 40 for Tier 2 and 15 for Tier 3. The controller reviews 15 accounts deeply instead of 200 superficially. The quality of those 15 reviews goes up. The time spent on the other 185 goes down. And the documentation trail proves that every account was assessed, even the ones below threshold.
A useful calibration check: if more than 50% of accounts are hitting Tier 3 every month, thresholds are too tight. If fewer than 5% are hitting Tier 3, thresholds may be too loose, or the business is unusually stable, or the thresholds have not been updated as the company has grown. Review thresholds quarterly, and coordinate with your auditors during planning discussions.
What changes at month-end
Variance analysis during close is harder than variance analysis at any other time, for three reasons.
First, the numbers keep moving. Late journal entries, accruals posted in the last few days of close, and reclassifications change the trial balance underneath the variance analysis. An explanation written on Monday morning may be wrong by Tuesday afternoon because someone posted a $60,000 accrual to the same account. Teams that start flux before the books are fully closed gain speed but risk rework. Teams that wait until every entry is posted lose two days of the close window.
Second, the evidence is scattered. The GL tells you what happened. It rarely tells you why. Explaining a payroll variance requires the headcount report. Explaining an OpEx spike requires the vendor invoice and sometimes a conversation with the department head who approved the purchase. Every explanation that requires cross-functional evidence adds latency to the close.
Third, the quality of explanations degrades under time pressure. A controller with eight hours to explain 200 accounts will write thorough, evidence-backed narratives for the first 30 and increasingly terse notes for the rest. The last 50 accounts get "timing" or "normal fluctuation" or last month's explanation copied forward. This is where stale flux commentary accumulates, and it is what auditors notice first.
How this works in NetSuite
For teams running on NetSuite, the variance analysis workflow tends to follow the same pattern every month. Someone runs a comparative financial report or a saved search to pull the trial balance. The data gets exported into Excel. Columns are added for dollar change, percentage change, and explanation. The accountant then filters for large movements, opens a separate tab to pull transaction detail for each flagged account, and starts writing.
The problem is not that NetSuite lacks the data. NetSuite's Financial Report Builder can produce period-over-period comparisons, and saved searches can surface transaction-level detail by account, subsidiary, department, class, or location. The data is there. The problem is that the explanation and review layer sits outside the system.
Once the export hits Excel, version control breaks down. One preparer downloads detail at 9:00 AM. Another pulls a refreshed report after a late journal entry posts at 1:00 PM. A reviewer comments on an explanation that was built on numbers that have since changed. Someone pastes a summary into a slide deck, and the audit trail gets thinner with every handoff.
The other friction point is dimensionality. A controller who sees that marketing expense increased $400K cannot stop at the account total. They need to pivot by vendor to see that one agency drove 80% of the movement. They need to pivot by department to see whether the spend was concentrated in one business unit or spread across the company. In NetSuite, that means either running multiple saved searches or building a voluminous custom report and manually pivoting and filtering through it. Either way, the process is cumbersome, and each iteration adds time and room for error.
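The vendor pivot described here reduces to a group-by over transaction detail. The row shape, vendor names, and amounts below are hypothetical; in practice the rows would come from a saved search export or an API pull.

```python
# Sketch of decomposing an account-level variance by any dimension (vendor,
# department, class, ...) and surfacing the largest drivers first.
# Transaction rows, field names, and amounts are hypothetical.

from collections import defaultdict

def variance_by(txns_prior, txns_current, dim):
    """Net movement per dimension value, largest absolute movers first."""
    totals = defaultdict(float)
    for t in txns_current:
        totals[t[dim]] += t["amount"]
    for t in txns_prior:
        totals[t[dim]] -= t["amount"]
    return sorted(totals.items(), key=lambda kv: -abs(kv[1]))

prior   = [{"vendor": "Agency A", "amount": 100_000},
           {"vendor": "Cloud Co", "amount": 50_000}]
current = [{"vendor": "Agency A", "amount": 420_000},
           {"vendor": "Cloud Co", "amount": 55_000}]
```

In this toy data, one agency drives $320K of the $325K total movement, which is exactly the kind of concentration the controller needs to see before writing the explanation.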
For NetSuite teams specifically, a stronger variance process should keep the source data current rather than frozen at the point of export. It should preserve the native dimensions teams already use (subsidiary, department, location, class, vendor, customer) without requiring manual reshaping. And when a reviewer questions an explanation, the evidence should be one click away rather than buried in a spreadsheet someone saved to their desktop three days ago. Maxima's native NetSuite integration addresses this directly, running flux analysis on top of NetSuite's data model rather than alongside it in a spreadsheet.
What a better workflow looks like
What changes with agentic AI is that variance analysis no longer starts with a blank spreadsheet and a manual hunt for drivers. The work is prepared before the review begins. Instead of calculating differences, pulling transaction detail, and writing explanations from scratch, the system surfaces what actually matters. Material variances are already identified. The underlying drivers are already broken down. Draft explanations are already written with supporting evidence attached. The accountant’s role shifts from assembling the analysis to evaluating it.
This is where platforms like Maxima come in. In practice, a controller opens to a list of accounts that have already been flagged. Data syncs directly from NetSuite and other ERPs and refreshes continuously, so flux analysis can begin before the books are fully closed. Materiality thresholds are configured by report and by account, using dollar thresholds, percentage thresholds, or both. Accounts that breach those thresholds are already waiting. The rest are documented and out of the way.
Instead of reconstructing why hosting expense moved $1.34 million, the controller opens to an explanation that already names the drivers, quantifies their impact, and ties the movement to specific transactions. “Hosting expense increased $1.34 million, driven by $1.37 million in retroactive Microsoft Azure billings for April through July posted in November” is an explanation a reviewer can act on. “Hosting expense increased” is not. From there, the investigation happens inside the workflow. The controller reviews the explanation, checks the linked transactions, and decides whether the logic holds. Last month’s explanation sits alongside the current one, making it clear whether the same root cause applies or something new has changed. If needed, they can go deeper by breaking the variance down across vendor, department, location, or product. Every number traces back to the underlying transactions, with direct links into the source system.
The workflow also surfaces what thresholds alone miss. Accrual reversals without re-accruals, duplicate entries, round-number anomalies, and balances that changed after the analysis are flagged automatically. The question shifts from “what changed?” to “is this correct?” Once the explanation is reviewed, adjusted, or approved, the audit trail captures each step. Every explanation follows a consistent structure, regardless of who prepared it or how much time pressure they were under. The output is not generated text. It is prepared accounting work.
The goal is not to remove accountants from variance analysis. It is to remove the manual assembly work so they can focus on what actually requires judgment: validating the drivers, identifying real issues, and ensuring the financials tell the right story. Variance analysis is not where accounting teams should spend their time. It is where they should apply their judgment. The difference is whether the work is prepared for them or assembled by them.
Variance analysis should not start with a blank spreadsheet. See how Maxima prepares explanations, traces movements to underlying transactions, and streamlines close review.
Frequently asked questions
How do you set materiality thresholds for variance analysis?
Use a dual threshold: a dollar amount and a percentage, with investigation triggered when either is exceeded. Common starting points are $25,000 or 10% for operating accounts, with tighter thresholds for revenue, payroll, and intercompany accounts. The right thresholds depend on the size of the company, the risk appetite of the controller, and what the external auditors expect. A useful rule: set your internal thresholds slightly below the auditor's materiality level so explanations are ready before audit fieldwork begins.
How do you write a good variance explanation?
A good explanation covers the six fields from the template: which account and period, the dollar and percentage variance, the root cause category, a short narrative, the supporting evidence, and who prepared and reviewed it, plus a flag for whether the variance recurs. Avoid vague language like "timing differences" or "normal fluctuations" without specifying what the timing difference is or what makes the fluctuation normal. The test: a reviewer should be able to sign off without asking a follow-up question.
What evidence is needed to support a variance explanation?
It depends on the variance type. Revenue variances need contracts, invoices, and delivery documentation. Payroll variances need payroll registers and headcount reports. OpEx variances need vendor invoices, contract terms, and usage metrics. The common thread: the evidence should be specific to the root cause, not a generic data dump. Attach it with the explanation, not after the reviewer asks for it.
What happens when a variance investigation reveals an error?
If investigation uncovers a posting error, a missed accrual, or a misclassification, a correcting journal entry is prepared and posted before the financial statements are finalized. The variance explanation should document both the error and the correction, including the JE reference. This is one of the most valuable functions of variance analysis: it serves as a detective control that catches errors before they reach external stakeholders.
Can variance analysis be done before the books are fully closed?
Yes, and teams that start early gain a significant advantage. Most accounts stabilize within the first day or two of close, and those variances can be investigated immediately. The risk is that late entries may change balances after explanations are written. The mitigation is a system that flags which accounts had post-flux balance changes, so preparers can update only the affected explanations rather than starting over.
AI native accounting automation to transform your month end close

Request demo
Comparison
Compliance
Stay up to date on Maxima and AI accounting
The first agentic AI platform for enterprise accounting
© 2025 Indus AI Technologies, Inc. All rights reserved.