You are building a critical integration between Salesforce and a third-party REST API. Everything works perfectly during testing with small files. However, the moment you attempt to upload a file larger than 4.5MB, your code crashes with a frustrating error: common.apex.runtime.impl.ExecutionException: String length exceeds maximum: 6000000.

This error is a notorious roadblock for Salesforce developers handling file transfers. While Salesforce is a powerful CRM, its multi-tenant architecture imposes strict limits on resource consumption. When you encounter the 6,000,000 character limit, you have reached the hard ceiling for a single String instance in Apex.

In this guide, we will analyze why this happens, look at the common code patterns that trigger it, and explore architectural strategies to bypass these limits effectively.

Understanding the 6,000,000 Character Limit

In the Salesforce Apex runtime, a single String variable cannot exceed 6 million characters. While the total heap size for synchronous Apex is 6MB (and 12MB for asynchronous), the String length limit is an independent constraint that often catches developers off guard.

When dealing with file uploads, the problem is compounded by Base64 encoding. To send binary data (like a PDF or image) over a text-based protocol like HTTP, you must encode the data. Base64 encoding increases the data size by approximately 33%.

Run the numbers: a 4.5MB file (4,718,592 bytes) becomes 6,291,456 characters once Base64 encoded, already past the 6,000,000 ceiling. Even a file that encodes just under the limit will cross it the moment you concatenate the multipart headers or footers onto the encoded String.
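The growth is easy to verify. Here is a quick sketch in JavaScript: Base64 maps every 3 input bytes to 4 output characters, padding the final partial group up to a full 4.

```javascript
// Base64 output length for n input bytes: each 3-byte group becomes 4
// characters, and the final partial group is padded to a full 4.
function base64Length(nBytes) {
  return Math.ceil(nBytes / 3) * 4;
}

const fileBytes = 4.5 * 1024 * 1024;  // 4,718,592 bytes
console.log(base64Length(fileBytes)); // 6291456 -- past the 6,000,000 cap
```

The same formula tells you the largest file that fits: roughly 4.3MB of binary data encodes to just under 6 million characters.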

The Common Pitfall: Manual Multipart Construction

Many third-party APIs require multipart/form-data requests. Since Apex does not natively support a "Multipart Writer" class, developers often resort to a manual approach: encoding the file, encoding the headers, and stitching them together using EncodingUtil.

Here is a common code pattern used to construct these requests:

public static void uploadAttachment(Attachment attachment, String folder) {

    String boundary = '----------------------------741e90d31eff';
    String header = '--' + boundary + '\r\nContent-Disposition: form-data; name="file"; filename="' + attachment.Name + '"\r\nContent-Type: application/octet-stream';
    String footer = '\r\n--'+boundary+'--';              
    String headerEncoded = EncodingUtil.base64Encode(Blob.valueOf(header+'\r\n\r\n'));

    // Aligning the header to base64 boundaries
    while(headerEncoded.endsWith('=')){
        header+=' ';
        headerEncoded = EncodingUtil.base64Encode(Blob.valueOf(header+'\r\n\r\n'));
    }

    String bodyEncoded = EncodingUtil.base64Encode(attachment.body);
    String footerEncoded = EncodingUtil.base64Encode(Blob.valueOf(footer));

    Blob bodyBlob = null;
    String last4Bytes = bodyEncoded.substring(bodyEncoded.length() - 4, bodyEncoded.length());

    // Handling the padding to ensure the final Blob is valid
    if(last4Bytes.endsWith('=')){
        Blob decoded4Bytes = EncodingUtil.base64Decode(last4Bytes);
        HttpRequest tmp = new HttpRequest();
        tmp.setBodyAsBlob(decoded4Bytes);
        String last4BytesFooter = tmp.getBody()+footer;   
        bodyBlob = EncodingUtil.base64Decode(headerEncoded + bodyEncoded.substring(0, bodyEncoded.length() - 4) + EncodingUtil.base64Encode(Blob.valueOf(last4BytesFooter)));
    }
    else{
        bodyBlob = EncodingUtil.base64Decode(headerEncoded+bodyEncoded+footerEncoded);
    }

    HttpRequest req = new HttpRequest();
    req.setHeader('Authorization', 'Your_Auth_Token');
    req.setHeader('Content-Type','multipart/form-data; boundary='+boundary);
    req.setMethod('POST');
    req.setEndpoint('https://api.thirdparty.com/upload/' + folder);
    req.setBodyAsBlob(bodyBlob);
    req.setTimeout(120000);

    Http http = new Http();
    HTTPResponse res = http.send(req);
}

Why this fails

The line bodyBlob = EncodingUtil.base64Decode(headerEncoded + bodyEncoded + footerEncoded); is the primary culprit. Even if the resulting Blob would fit within the heap limit, the intermediate step of concatenating headerEncoded + bodyEncoded + footerEncoded creates a new String in memory. If the sum of those characters exceeds 6,000,000, the execution fails instantly.

Strategy 1: Off-Platform Processing (Middleware)

When you consistently need to handle files larger than 4-5MB, the most robust architectural decision is to move the heavy lifting off the Salesforce platform. Salesforce is designed for business logic and data management, not high-volume binary stream manipulation.

Using a Heroku Middleware

You can create a small Node.js, Python, or Java application hosted on Heroku. The process looks like this:

1. Salesforce sends the Attachment ID or ContentVersion ID to the Heroku app.
2. The Heroku app uses the Salesforce REST API (or a library like JSforce) to download the file directly into its own memory space.
3. The Heroku app performs the multipart/form-data construction and forwards the file to the 3rd party API.
4. Heroku returns a success/failure status back to Salesforce.
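A minimal Node.js sketch of the download-and-forward steps, assuming Node 18+ for the global fetch/FormData APIs; the environment variables and the third-party endpoint below are placeholders you would supply:

```javascript
// Placeholders: supply your own instance URL and OAuth access token.
const SF_INSTANCE_URL = process.env.SF_INSTANCE_URL; // e.g. https://yourorg.my.salesforce.com
const SF_ACCESS_TOKEN = process.env.SF_ACCESS_TOKEN;

// The standard REST endpoint for a ContentVersion's binary payload.
function versionDataUrl(instanceUrl, versionId) {
  return `${instanceUrl}/services/data/v59.0/sobjects/ContentVersion/${versionId}/VersionData`;
}

// Step 2: pull the raw bytes out of Salesforce -- no Apex String is ever created.
async function downloadContentVersion(versionId) {
  const res = await fetch(versionDataUrl(SF_INSTANCE_URL, versionId), {
    headers: { Authorization: `Bearer ${SF_ACCESS_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Salesforce download failed: ${res.status}`);
  return Buffer.from(await res.arrayBuffer());
}

// Step 3: build the multipart body here, where memory limits are far looser.
async function forwardToThirdParty(fileBytes, fileName, folder) {
  const form = new FormData();
  form.append('file', new Blob([fileBytes]), fileName);
  const res = await fetch(`https://api.thirdparty.com/upload/${folder}`, {
    method: 'POST',
    body: form, // fetch sets the multipart Content-Type and boundary itself
  });
  return res.status; // Step 4: report this back to Salesforce
}
```

Note that the awkward base64-boundary alignment from the Apex example disappears entirely: the runtime builds the multipart body from raw bytes, so no encoding gymnastics are needed.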

This approach bypasses Apex limits entirely because the binary manipulation happens in an environment with much more flexible memory constraints.

Strategy 2: Client-Side Uploads via LWC

If the file upload is triggered by a user action in the UI, you can bypass Apex limits by performing the upload directly from the browser using a Lightning Web Component (LWC).

By using the JavaScript fetch() API or XMLHttpRequest, you can construct the multipart request in the user's browser. Since the browser (client-side) is sending the data directly to the 3rd party service, the file never passes through the Apex runtime.

Pros:

- No Apex heap or String limits.
- Faster performance, as data doesn't hop through Salesforce servers.
- Better user experience with real-time progress bars.

Cons:

- You must handle Cross-Origin Resource Sharing (CORS) on the 3rd party server.
- You may need to proxy authentication or use secure tokens to ensure the 3rd party API is not exposed.
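The fetch() approach can be sketched as a plain function; in a real LWC it would live in the component's JavaScript file, with `file` coming from the change event of a lightning-input of type "file". The endpoint and token here are hypothetical:

```javascript
// `file` is the File/Blob from the browser; `endpoint` and `token` are
// placeholders (e.g. a short-lived token fetched from Apex beforehand).
async function uploadDirect(file, fileName, endpoint, token) {
  const form = new FormData();
  form.append('file', file, fileName); // the browser builds the multipart body
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}` },
    body: form, // streamed straight to the third party; Apex never sees the bytes
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
  return res.status;
}
```

One design note: if you need the real-time progress bar mentioned above, use XMLHttpRequest with its upload.onprogress event, since fetch() does not expose upload progress.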

Strategy 3: Chunked Uploads

Check if the 3rd party API supports chunked or resumable uploads. Some modern APIs (like Google Drive, Box, or AWS S3) allow you to break a file into smaller pieces (e.g., 1MB chunks) and send them sequentially.

In Apex, you can use substring logic to break the Base64 string into smaller parts and send multiple HTTP requests; splitting at a multiple of 4 characters keeps each chunk independently decodable. The external service then reassembles the chunks on its end. This keeps your String lengths well below the 6,000,000 character limit and stays within heap limits.
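The splitting logic can be sketched as follows, shown in JavaScript for easy testing; Apex's String.substring behaves the same way. The key detail is cutting on a 4-character boundary, because 4 Base64 characters always encode exactly 3 bytes, so no byte group is ever split across chunks:

```javascript
// Split a Base64 string into chunks the receiving service can decode one by one.
// chunkChars must be a multiple of 4 so each chunk stays valid Base64 on its own.
function splitBase64(encoded, chunkChars) {
  if (chunkChars % 4 !== 0) throw new Error('chunkChars must be a multiple of 4');
  const chunks = [];
  for (let i = 0; i < encoded.length; i += chunkChars) {
    chunks.push(encoded.substring(i, i + chunkChars));
  }
  return chunks;
}
```

Each chunk then goes out in its own HttpRequest, so even a 1,000,000-character chunk size keeps every Apex String far below the ceiling, and the decoded bytes concatenate back into the original file on the receiving side.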

Frequently Asked Questions

Is the String limit different for Asynchronous Apex?

No. While the total heap size increases from 6MB to 12MB for Batch Apex and Queueable jobs, the maximum length of a single String remains 6,000,000 characters. You cannot simply move the code to a @future method to fix this specific error.

Can I use ContentVersion instead of Attachment to fix this?

Using ContentVersion is a best practice for file storage in Salesforce, but it does not change the Apex String limit. Whether you pull the body from an Attachment or a VersionData field, you are still dealing with a Blob that must be converted to a String for manual multipart construction, hitting the same ceiling.

Wrapping Up

The String length exceeds maximum: 6000000 error is a clear signal that your integration has outgrown the capabilities of native Apex binary manipulation. For files under 4MB, the manual base64 concatenation method works reasonably well. However, for larger files, you must look toward off-platform middleware or client-side LWC uploads.

By shifting the processing power to the client's browser or a dedicated middleware service like Heroku, you ensure your integration remains scalable, performant, and free from the restrictive governor limits of the Salesforce runtime.