〰️LWC, Apex and Flows
In this article, we'll explore a more complex logging scenario involving three Salesforce technologies: an LWC component, its Apex controller, and an autolaunched Flow. Our goal is to produce a unified log trace across all three layers while demonstrating both direct parameter passing and cache-based approaches for transaction management.
The scenario follows this sequence:
The user clicks a button in the LWC
The Apex controller performs a DML operation
The DML triggers an autolaunched Flow
The Flow performs its operations and logs its progress
Implementation Approaches
Let's examine two approaches to managing the transaction ID across these contexts, each with its own implementation.
Approach 1: Direct Parameter Passing
This approach explicitly passes the transaction ID through method parameters.
LWC Controller
// logDemoDirectParams.js
import { LightningElement } from 'lwc';
import apexActionThatTriggersFlow from '@salesforce/apex/LogDemoDirectController.apexActionThatTriggersFlow';
import Triton, { AREA, TYPE } from 'c/triton';

export default class LogDemoDirectParams extends LightningElement {
    triton;

    connectedCallback() {
        // Bind triton to this component
        this.triton = new Triton().bindToComponent('LogDemoDirectParams');
    }

    async executeScenario() {
        // Create initial LWC log and get transaction ID
        await this.triton.logNow(
            this.triton.debug(TYPE.FRONTEND, AREA.ACCOUNTS)
                .summary('Flow Demo - Start')
                .details('Starting execution with direct parameter passing')
        );

        try {
            // Pass transaction ID to Apex
            await apexActionThatTriggersFlow({
                transactionId: this.triton.transactionId
            });
        } catch (error) {
            await this.triton.logNow(this.triton.exception(error));
        }
    }
}
Apex Controller
// LogDemoDirectController.cls
public with sharing class LogDemoDirectController {
    @AuraEnabled
    public static void apexActionThatTriggersFlow(String transactionId) {
        // Resume the transaction started in LWC
        Triton.instance.resumeTransaction(transactionId);

        Triton.instance.debug(
            TritonTypes.Type.Backend,
            TritonTypes.Area.Community,
            'Flow Demo - Apex',
            'Performing DML to trigger flow'
        );

        try {
            // Update account to trigger flow
            List<Account> accounts = [SELECT Id, Name FROM Account LIMIT 1];
            if (!accounts.isEmpty()) {
                accounts[0].Name = 'Flow Demo';
                update accounts;
            }
        } catch (Exception e) {
            Triton.instance.error(TritonTypes.Area.Community, e);
        }
    }
}
Approach 2: Using Platform Cache
This approach uses platform cache to automatically share the transaction ID.
LWC Controller
// logDemoCache.js
import { LightningElement } from 'lwc';
import apexActionThatTriggersFlow from '@salesforce/apex/LogDemoCacheController.apexActionThatTriggersFlow';
import Triton, { AREA, TYPE } from 'c/triton';

export default class LogDemoCache extends LightningElement {
    triton;

    connectedCallback() {
        // Bind triton to this component
        this.triton = new Triton().bindToComponent('LogDemoCache');
    }

    async executeScenario() {
        // Initial log will be cached automatically
        await this.triton.logNow(
            this.triton.debug(TYPE.FRONTEND, AREA.ACCOUNTS)
                .summary('Flow Demo - Start')
                .details('Starting execution with cache enabled')
        );

        try {
            // No need to pass transaction ID
            await apexActionThatTriggersFlow();
        } catch (error) {
            await this.triton.logNow(this.triton.exception(error));
        }
    }
}
Apex Controller
// LogDemoCacheController.cls
public with sharing class LogDemoCacheController {
    @AuraEnabled
    public static void apexActionThatTriggersFlow() {
        // Enable cache to automatically retrieve transaction ID
        Triton.instance.withCache();

        Triton.instance.debug(
            TritonTypes.Type.Backend,
            TritonTypes.Area.Community,
            'Flow Demo - Apex',
            'Performing DML to trigger flow'
        );

        try {
            // Update account to trigger flow
            List<Account> accounts = [SELECT Id, Name FROM Account LIMIT 1];
            if (!accounts.isEmpty()) {
                accounts[0].Name = 'Flow Demo';
                update accounts;
            }
        } catch (Exception e) {
            Triton.instance.error(TritonTypes.Area.Community, e);
        }
    }
}
Transaction ID Flow
Understanding how the transaction ID connects logs across contexts is crucial:
LWC Initialization
Direct Params: Transaction ID is generated when the first log is created
Cache: Transaction ID is generated and automatically cached
Apex Execution
Direct Params: Transaction ID is passed via method parameter and resumed
Cache: Transaction ID is automatically retrieved from cache
Flow Execution
The Flow's Interview GUID is automatically associated with the transaction
All Flow logs are grouped under the same transaction
Here's what the resulting log structure looks like in Pharos:
Transaction: 7fb8c320-e8f9-4f1b-9742-1a5768b9e0d2
├── LWC Debug Log (Start)
├── Apex Debug Log (DML Operation)
└── Flow Execution
    ├── Flow Start Log
    ├── Flow Operation Log
    └── Flow Completion Log
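To confirm which transaction these logs will be grouped under, the component can read the ID directly from its Triton instance as soon as the first log has been published. Here's a minimal sketch of the executeScenario method from the LogDemoDirectParams component above; the console output is purely illustrative:
async executeScenario() {
    // Publishing the first log generates the transaction ID
    await this.triton.logNow(
        this.triton.debug(TYPE.FRONTEND, AREA.ACCOUNTS)
            .summary('Flow Demo - Start')
            .details('Starting execution with direct parameter passing')
    );
    // The same ID is resumed in Apex and groups the Flow logs in Pharos
    console.log('Triton transaction ID:', this.triton.transactionId);
}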
Combined Apex Controller
A single Apex controller can also handle both approaches:
public with sharing class LogDemoController {
    @AuraEnabled
    public static void apexActionThatTriggersFlow(String transactionId, Boolean cacheEnabled) {
        // Boolean can be null if the caller omits it, so compare explicitly
        if (cacheEnabled == true) {
            Triton.instance.withCache();
        } else if (String.isNotBlank(transactionId)) {
            Triton.instance.resumeTransaction(transactionId);
        }

        Triton.instance.debug(
            TritonTypes.Type.Backend,
            TritonTypes.Area.Community,
            'Flow Demo - Apex',
            'Performing DML to trigger flow'
        );

        try {
            // Update account to trigger flow
            List<Account> accounts = [SELECT Id, Name FROM Account LIMIT 1];
            if (!accounts.isEmpty()) {
                accounts[0].Name = 'Flow Demo';
                update accounts;
            }
        } catch (Exception e) {
            Triton.instance.error(TritonTypes.Area.Community, e);
        }
    }
}
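On the LWC side, calling this combined controller only changes which parameters are sent. Below is a minimal sketch; the logDemoCombined component name is assumed for illustration, while the Triton calls mirror the earlier examples:
// logDemoCombined.js (illustrative sketch)
import { LightningElement } from 'lwc';
import apexActionThatTriggersFlow from '@salesforce/apex/LogDemoController.apexActionThatTriggersFlow';
import Triton, { AREA, TYPE } from 'c/triton';

export default class LogDemoCombined extends LightningElement {
    triton;

    connectedCallback() {
        this.triton = new Triton().bindToComponent('LogDemoCombined');
    }

    async executeScenario(useCache) {
        // Create the initial LWC log and get the transaction ID
        await this.triton.logNow(
            this.triton.debug(TYPE.FRONTEND, AREA.ACCOUNTS)
                .summary('Flow Demo - Start')
                .details('Starting combined execution')
        );

        try {
            // Send the transaction ID only when the cache is not used
            await apexActionThatTriggersFlow({
                transactionId: useCache ? null : this.triton.transactionId,
                cacheEnabled: useCache
            });
        } catch (error) {
            await this.triton.logNow(this.triton.exception(error));
        }
    }
}
The Apex side then decides whether to resume from the parameter or from the cache, exactly as in the controller above.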
The Flow
The Flow setup remains similar to what's described in the Flows article, with one key difference: the Flow must be able to access the same transaction context regardless of which approach was used to manage the transaction ID.
Flow Configuration
Add the Triton Apex action as the first element
Pass the Interview GUID using $Flow.InterviewGuid
Configure logging parameters as described in the Flows documentation

Choosing Between Approaches
Direct Parameter Passing
Best for simple, linear execution paths
More explicit and easier to debug
Recommended for initial implementation
Platform Cache
Better for complex scenarios with multiple components
Handles asynchronous operations more elegantly
Useful when direct parameter passing becomes cumbersome
The Result
Regardless of the approach chosen, you'll see this log structure:
Parent LWC Debug Log
Child Apex Debug Log
Child Flow Debug Log
Additional Flow logs as configured
Any error logs if exceptions occur
This unified view provides complete visibility into your execution path across all three technologies.
Handling Flow Failures
When an unhandled error occurs in your Flow, Pharos automatically captures and correlates this failure with your existing transaction logs. This is possible because:
The Flow's Interview GUID is passed to each Triton log action
Pharos automatically captures unhandled Flow errors
The transaction ID links everything together
Example of an Unhandled Flow Error
Let's modify our example to trigger an error in the Flow:
// LogDemoCacheController.cls
public with sharing class LogDemoCacheController {
    @AuraEnabled
    public static void apexActionThatTriggersFlow() {
        Triton.instance.withCache();

        Triton.instance.debug(
            TritonTypes.Type.Backend,
            TritonTypes.Area.Community,
            'Flow Demo - Apex',
            'Triggering flow that will fail'
        );

        try {
            // Update account with invalid data to cause Flow failure
            List<Account> accounts = [SELECT Id, Name FROM Account LIMIT 1];
            if (!accounts.isEmpty()) {
                accounts[0].Name = null; // This will cause Flow validation to fail
                update accounts;
            }
        } catch (Exception e) {
            Triton.instance.error(TritonTypes.Area.Community, e);
        }
    }
}
Resulting Log Structure
When the Flow fails, you'll see this structure in Pharos:
Transaction: 7fb8c320-e8f9-4f1b-9742-1a5768b9e0d2
├── LWC Debug Log (Start)
├── Apex Debug Log (DML Operation)
└── Flow Execution
    ├── Flow Start Log
    ├── Flow Operation Log
    └── Unhandled Flow Error
        └── Flow Debug Details
            ├── Interview GUID
            ├── Current Element
            ├── Error Message
            └── Stack Trace
Benefits of Automatic Error Correlation
Complete Context: See the entire execution path that led to the failure
No Extra Code: No additional error handling required to capture Flow failures
Production Debugging: Valuable insight into production issues without debug logs
Time Savings: Quickly identify where in the process the failure occurred
For more details on Flow logging and error handling, see the Flows documentation.