7Block Labs
Blockchain Development

By AUJay

Web3 API Design: Creating API for Web3 and Streamlining API Development for Seamless Blockchain Integration

Description: This is a hands-on guide for decision-makers who want to create dependable, secure, and scalable Web3 APIs that work seamlessly with both modern Ethereum and Layer 2 solutions. We'll dive into the essentials like JSON-RPC basics, finality semantics, EIP-1559/4844 fees, event streaming, account abstraction, indexing, simulation, and best operational practices. Plus, you’ll find solid examples and insights into up-and-coming standards.

Why Web3 API design is different

Traditional API design patterns can really struggle when your backend is a decentralized network. We're talking about issues like probabilistic inclusion, reorgs, multiple execution clients, and standards that seem to change overnight. So, if you want a solid Web3 API, it needs to:

  1. Handle Uncertainty: APIs should be able to deal with the unpredictable nature of decentralized networks. This means being ready for things like delayed transactions and varying confirmation times.
  2. Support Multiple Clients: Your API should work seamlessly with different execution clients. This ensures that no matter what the user prefers, they can interact with your service without a hitch.
  3. Adapt to Changing Standards: The Web3 landscape is always evolving, so your API needs to be flexible enough to accommodate new protocols and standards as they emerge.
  4. Ensure Security: Given the decentralized system's vulnerabilities, having robust security measures is crucial. This means implementing strong authentication methods and keeping data secure at all times.
  5. Be Developer-Friendly: Make your API easy for developers to use. Clear documentation, helpful error messages, and comprehensive examples can go a long way in making their lives easier.

By focusing on these key areas, you’ll create an API that not only meets the unique challenges of a decentralized environment but also stands the test of time.

  • Make sure to include chain finality and reorg risk in every read/write path.
  • Tackle different client behaviors while keeping it compatible with the usual Ethereum JSON-RPC surface. (eips.ethereum.org)
  • Get ready to support the fee and data-availability markets that EIP-1559 and EIP-4844 brought into the mix. (eips.ethereum.org)
  • Provide both pull (RPC/GraphQL) and push (WebSocket/webhooks/streams) data flows. (chainnodes.org)
  • Adapt to modern wallet flows (EIP-1193/6963) and account abstraction (ERC-4337, ERC-6900). (eips.ethereum.org)

Here’s a handy guide that 7Block Labs swears by to help startups and established businesses roll out Web3 features without a hitch.


1) Establish your canonical API surface

Execution Layer JSON‑RPC Methods

Let’s kick things off with the execution-layer JSON‑RPC methods that have been standardized for Ethereum clients, specifically through EIP‑1474 (execution-apis). Think of this as your basic toolkit; anything beyond this is just icing on the cake. Here are some key points:

  • Follow the JSON-RPC 2.0 request/response shape and batching rules; codes −32768 to −32000 are reserved by the spec, with −32000 to −32099 set aside for implementation-defined server errors. (json-rpc.org)
  • Dive into value encoding and block parameter semantics, which cover tags like latest, pending, safe, and finalized. (eips.ethereum.org)
  • Explore how EIP‑1898 introduces an object form for block selection, helping to clear up any confusion during reorgs. (eips.ethereum.org)

Example: pin a read to a specific block hash to keep it reorg-safe.

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_call",
  "params": [
    {"to": "0xYourContract", "data": "0xYourCalldata"},
    {"blockHash": "0xabc...def", "requireCanonical": true}
  ]
}

2) Finality-aware reads and SLAs

Executives are all about those clear-cut dashboards, while engineers are juggling the tricky balance between latency and safety.

  • Ethereum's proof of stake finalizes blocks after two epochs, which usually works out to roughly 13–15 minutes (single-slot finality research aims to shorten this). Use “safe” for near-final data and “finalized” for settlement-grade reads. (ethereum.org)
  • Expose explicit read levels in your API, such as fast (latest), safe, and finalized, backed by the corresponding JSON-RPC block tags, so clients can pick the level of risk they’re comfortable with. (eips.ethereum.org)

Example: “safe” inventory query:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "eth_getBalance",
  "params": ["0xAddress", "safe"]
}
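The read-level mapping can live in one small gateway helper. A sketch (the `READ_LEVELS` table and `balanceRequest` name are illustrative, not a standard API):

```javascript
// Hypothetical helper: map API read levels to JSON-RPC block tags.
const READ_LEVELS = { fast: "latest", safe: "safe", finalized: "finalized" };

// Build an eth_getBalance request body for a given read level.
function balanceRequest(address, readLevel, id = 1) {
  const tag = READ_LEVELS[readLevel];
  if (!tag) throw new Error(`unknown read level: ${readLevel}`);
  return {
    jsonrpc: "2.0",
    id,
    method: "eth_getBalance",
    params: [address, tag],
  };
}
```

Centralizing the mapping means a future tag (or an L2-specific level) is a one-line change rather than a sweep through every endpoint.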

3) Fees, throughput, and blobs: design for EIP‑1559 and EIP‑4844

Fee estimation and transaction orchestration need to account for the two markets that are currently active on Ethereum and rollups.

  • EIP‑1559 introduces the baseFee + priorityFee model with type‑2 transactions. Use eth_feeHistory for percentile tip data; many providers also return baseFeePerBlobGas and blobGasUsedRatio in the same response. (eips.ethereum.org)
  • EIP‑4844 adds type‑3 “blob-carrying” transactions and a separate blob gas market, targeting 3 blobs per block with a maximum of 6. The blob base fee rises exponentially under sustained demand, which makes it central to L2 posting costs and observability. (eips.ethereum.org)
  • Watch blob fees: non‑L2 activity (like “blobscriptions”) has spiked them from 1 wei to around 650 gwei. Your API should include blob fee telemetry and fallbacks for when blobs aren’t the economic choice.

Practical Tactic:

Let’s set up a straightforward “estimateFees” endpoint that gives back some handy info:

  • Suggestions for baseFee and priorityFee pulled from eth_feeHistory.
  • A snapshot of the blob base fee along with a projection for the next 5-10 blocks.
  • A recommendation on the transaction type (either 2 or 3) based on the size of the payload.

Reference: The eth_feeHistory function gives you both EIP‑1559 and blob fee fields on networks that support them. Check it out here: (quicknode.com)
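The core of such an endpoint can be a pure function over an eth_feeHistory-shaped result, which keeps it testable offline. A sketch (field names follow eth_feeHistory; the 100 kB blob cutoff and the 2× base-fee headroom are illustrative assumptions, not protocol rules):

```javascript
// Hypothetical estimateFees core: derive suggestions from an
// eth_feeHistory-shaped result. Fee fields are hex wei strings.
function estimateFees(history, payloadBytes) {
  const toInt = (hex) => parseInt(hex, 16);
  // feeHistory returns one more baseFee entry than blocks requested;
  // the last entry is the upcoming block's base fee.
  const baseFee = toInt(history.baseFeePerGas.at(-1));
  // Median of the first requested reward percentile across sampled blocks.
  const tips = history.reward.map((r) => toInt(r[0])).sort((a, b) => a - b);
  const priorityFee = tips[Math.floor(tips.length / 2)];
  const blobBaseFee = history.baseFeePerBlobGas
    ? toInt(history.baseFeePerBlobGas.at(-1))
    : null;
  return {
    baseFee,
    priorityFee,
    maxFeePerGas: 2 * baseFee + priorityFee, // headroom for base-fee growth
    blobBaseFee,
    // Illustrative heuristic: large payloads favor type-3 (blob) txs.
    recommendedTxType: blobBaseFee !== null && payloadBytes > 100_000 ? 3 : 2,
  };
}
```

The HTTP layer then just fetches eth_feeHistory and passes the result through; the fee math never touches the network.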


4) Event data at scale: logs, filters, and chunking

Most apps really rely on indexed events. Here are some common pitfalls to watch out for:

  • Most providers cap eth_getLogs responses (commonly around 10,000 results with roughly 10-second timeouts). Chunk queries by block range and narrow them by address or topics to avoid −32005 “limit exceeded” errors.
  • For larger scans on Ethereum mainnet, 3,000–5,000 block windows are a practical default that many providers recommend.
  • Steer clear of legacy eth_newFilter polling for historical data; prefer stateless eth_getLogs for backfills plus provider streams for real-time updates.

Example: Chunked Backfill with Safe Range

Chunked backfill splits a historical scan into fixed-size block windows so that no single eth_getLogs call exceeds provider result or timeout limits. Two bounds keep it safe:

  • Cursor (lower bound): the last block you fully processed; persist it so a failed job resumes instead of restarting.
  • Safe tip (upper bound): cap toBlock at the current “safe” block, so a reorg can’t invalidate freshly indexed data.

A runnable version of the backfill loop (rpc.get_logs stands in for your JSON-RPC client):

def backfill_logs(rpc, address, topics, start_block, safe_tip, chunk_size=4000):
    """Scan [start_block, safe_tip] in chunk_size windows and collect logs."""
    logs = []
    for frm in range(start_block, safe_tip + 1, chunk_size):
        to = min(frm + chunk_size - 1, safe_tip)
        logs.extend(rpc.get_logs({
            "fromBlock": hex(frm), "toBlock": hex(to),
            "address": address, "topics": topics,
        }))
    return logs

Each window is independent, so a chunk that hits a provider limit can be retried or split without repeating the whole scan.

[
  {
    "jsonrpc":"2.0","id":1,"method":"eth_getLogs",
    "params":[{"fromBlock":"0xA1B2C3","toBlock":"0xA1D4EF","address":"0xToken","topics":["0xddf252ad..."]}]
  }
]

(eth_getLogs has no toBlockTag parameter; resolve the current “safe” block number first and use it as toBlock.)

Note: A few clients provide some handy non-standard accelerants, such as eth_getBlockReceipts (works with Erigon and Geth ≥1.13). This feature helps you gather receipts per block in a super efficient way--so make sure to check for its availability and have a smooth fallback option in case it’s not there. (docs.chainstack.com)
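That availability probe can be wrapped once at the gateway. A hedged sketch, where `rpc.send(method, params)` is a hypothetical JSON-RPC client:

```javascript
// Hypothetical client: rpc.send(method, params) -> Promise<result>.
async function getReceiptsForBlock(rpc, blockNumberHex) {
  try {
    // Fast path: one call per block (Erigon, Geth >= 1.13).
    return await rpc.send("eth_getBlockReceipts", [blockNumberHex]);
  } catch (err) {
    // Fallback: fetch the block, then one receipt per transaction.
    const block = await rpc.send("eth_getBlockByNumber", [blockNumberHex, false]);
    return Promise.all(
      block.transactions.map((hash) =>
        rpc.send("eth_getTransactionReceipt", [hash])
      )
    );
  }
}
```

Callers get the same receipt array either way, so swapping providers never leaks into the indexing code above it.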


5) Real‑time delivery: WebSockets, webhooks, and streams

Offer Push Alternatives to Cut Down on Polling Costs and Latency

Polling wastes requests and adds latency; push delivery hands you data the moment it changes. The main patterns:

  1. WebSockets: a persistent, bidirectional connection between client and server; the natural fit for eth_subscribe and trading-style workloads.
  2. Webhooks: the provider POSTs events to your HTTPS endpoint, so you run no sockets at all; pair with a queue to absorb bursts.
  3. Server-Sent Events (SSE): one-way streaming over a single HTTP connection, a good fit for dashboards and tickers.
  4. GraphQL Subscriptions: real-time updates for clients already consuming a GraphQL API.

Pick based on who operates the connection and what delivery guarantees you need; the Web3-specific options that follow map onto these patterns.

  • You can use WebSocket eth_subscribe for tracking new heads, logs, and pending transactions. It's perfect for sessioned backends and trading-type scenarios. Check it out here: (chainnodes.org).
  • If running persistent sockets at scale isn't your thing, give provider webhooks a shot. For example, Alchemy’s Address/NFT Activity can track millions of addresses per webhook and send the info into your queue. Find more details here: (alchemy.com).
  • When it comes to heavy analytics, consider using The Graph Substreams. They help parallelize backprocessing and ensure reliable cursoring; teams have seen big improvements in speed and cost compared to raw RPC. Dive into it here: (thegraph.com).
  • QuickNode Streams offers managed backfills and live streams across various chains. Take a look: (quicknode.com).

Example: Basic WS Subscribe

The browser flow mirrors the wscat session below: open a socket to your provider’s WSS endpoint, send an eth_subscribe request once the connection opens, then route incoming messages by subscription id.

const socket = new WebSocket('wss://mainnet.your-provider.example/ws');

socket.addEventListener('open', function () {
    // Subscribe to Transfer logs from one token contract
    socket.send(JSON.stringify({
        id: 1,
        jsonrpc: '2.0',
        method: 'eth_subscribe',
        params: ['logs', { address: '0xToken', topics: ['0xddf252ad...'] }]
    }));
});

socket.addEventListener('message', function (event) {
    const msg = JSON.parse(event.data);
    if (msg.id === 1) {
        console.log('Subscription id:', msg.result);
    } else if (msg.method === 'eth_subscription') {
        console.log('New log:', msg.params.result);
    }
});

socket.addEventListener('error', function (error) {
    console.error('WebSocket Error: ', error);
});

socket.addEventListener('close', function () {
    // Reconnect and re-subscribe here; subscriptions don't survive reconnects
});

Subscriptions are tied to the connection, so production clients need reconnect-and-resubscribe logic plus an eth_getLogs gap-fill for anything missed while offline. The same subscription over wscat:

wscat -c wss://mainnet.your-provider.example/ws
> {"id":1,"jsonrpc":"2.0","method":"eth_subscribe","params":["logs",{"address":"0xToken","topics":["0xddf252ad..."]}]}

Providers have different approaches when it comes to pending transaction exposure and rate controls--so it's a good idea to include some plan-aware safeguards. Check it out here: (chainnodes.org)


6) Wallets, providers, and discovery

Frontends are leaning more and more on EIP-1193 provider events, along with the multi-wallet discovery standard.

  • Make sure to manage the provider's request() method along with the events like connect, disconnect, chainChanged, and accountsChanged. Check it out here: (eips.ethereum.org).
  • Don't forget to implement EIP‑6963 so your dapp can find multiple injected wallets, which helps avoid those pesky window.ethereum collisions. If you're using libraries like Web3.js v4, they have some handy helpers for this! More details here: (eip.info).

Authentication and Authorization:

Authentication answers “who are you?”; authorization answers “what may you do?”. Both still apply in a Web3 API, with wallet signatures taking the place of passwords:

  • Authentication: verify a wallet signature over a server-issued challenge; combine with API keys or 2FA where a dashboard login is involved.
  • Authorization: once the caller is known, apply role-based (RBAC) or attribute-based (ABAC) access control and explicit policies to decide which endpoints, chains, and rate limits they get.

Without authentication, anyone can impersonate a user; without authorization, even an authenticated user can reach data they shouldn’t. The Ethereum-native building blocks:

  • Sign-In with Ethereum (ERC-4361) standardizes an origin-bound message format for wallet login. Pair it with server-side signature validation and session cookies or JWTs for a smoother experience. (eips.ethereum.org)
  • For permissioned API actions, layer “ReCaps” (ERC-5573) on top of SIWE to grant scoped capabilities with explicit user consent. (eips.ethereum.org)
  • For typed off-chain messages, like permits or meta-transactions, use EIP-712 structured-data signing. (eips.ethereum.org)
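An EIP-712 signature request is passed to the wallet via eth_signTypedData_v4. The payload below is a sketch of a permit-style message; the domain values, contract addresses, and field values are illustrative:

```javascript
// Illustrative EIP-712 payload for eth_signTypedData_v4.
const typedData = {
  types: {
    EIP712Domain: [
      { name: "name", type: "string" },
      { name: "version", type: "string" },
      { name: "chainId", type: "uint256" },
      { name: "verifyingContract", type: "address" },
    ],
    Permit: [
      { name: "owner", type: "address" },
      { name: "spender", type: "address" },
      { name: "value", type: "uint256" },
      { name: "nonce", type: "uint256" },
      { name: "deadline", type: "uint256" },
    ],
  },
  primaryType: "Permit",
  domain: {
    name: "ExampleToken", // illustrative domain values
    version: "1",
    chainId: 1,
    verifyingContract: "0xYourContract",
  },
  message: {
    owner: "0xUserAddress",
    spender: "0xSpenderAddress",
    value: "1000000000000000000",
    nonce: "0",
    deadline: "1700000000",
  },
};
```

Because the domain binds the signature to one contract on one chain, a signed permit can’t be replayed elsewhere, which is exactly what free-form personal_sign messages can’t guarantee.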

7) Account abstraction and modular accounts

Plan for AA where users interact via smart contract accounts:

With account abstraction, the “user” on the other side of your API is a smart contract account, not an EOA. That changes several assumptions:

  • Custom validation: accounts can require multisig, session keys, or spending limits, so signature checks can’t assume a plain ECDSA signature from a stable key.
  • Sponsored and batched flows: transactions may be paid for by a paymaster or bundled with others, so the write path should accept UserOperations alongside raw signed transactions.
  • Recovery and key rotation: the signing key can change while the account persists, so key your user model off the account address, never the signing key.

The relevant standards:

  • So, ERC‑4337 rolls out the UserOperation with some cool features like bundlers and an EntryPoint contract. This means that AA wallets can now set up custom validation, sponsors, session keys, and recovery options. Make sure your API can handle UO submission, simulation, and status updates. Check it out here: (eips.ethereum.org).
  • Then there's the modular smart accounts (ERC‑6900, draft) which are all about standardizing how accounts and modules interact--think validations, execution functions, and hooks. This should really boost interoperability among AA wallets and modules. Don’t forget to keep your gateway flexible so it can support these interfaces as they get solidified. You can read more about it here: (eips.ethereum.org).

Tip: make sure to expose the “simulateUserOp” and “estimateUserOpGas” endpoints, reflecting the new RPCs. This way, if you ever need to switch providers or clients down the line, your customers won’t feel a thing. (A bunch of AA RPCs actually use the same state-override logic as eth_call.) (eip.directory)
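A bundler submission then looks like any other JSON-RPC call. A sketch of eth_sendUserOperation using the ERC-4337 v0.6 field set (addresses and values illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "eth_sendUserOperation",
  "params": [
    {
      "sender": "0xSmartAccount",
      "nonce": "0x0",
      "initCode": "0x",
      "callData": "0xYourCalldata",
      "callGasLimit": "0x30000",
      "verificationGasLimit": "0x20000",
      "preVerificationGas": "0xC000",
      "maxFeePerGas": "0x59682F00",
      "maxPriorityFeePerGas": "0x3B9ACA00",
      "paymasterAndData": "0x",
      "signature": "0xUserSignature"
    },
    "0xEntryPointAddress"
  ]
}
```

Mirror eth_estimateUserOperationGas and eth_getUserOperationReceipt in the same shape so UserOperation status polling feels identical to transaction polling.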


8) Preflight simulation and safety rails

Pre-Execution Checks for Transactions on Mainnet or L2

Before your gateway broadcasts a transaction on mainnet or an L2, run it through a set of pre-execution checks:

  1. Transaction Fees: fetch current base and priority fees (e.g., via eth_feeHistory) so users aren’t surprised at signing time.
  2. Contract Address: double-check checksummed target addresses; a single character off can mean a lost transaction.
  3. Input Data: validate calldata encoding and parameters against the contract ABI before confirming.
  4. Network Status: if the network is congested, surface that and suggest waiting or raising the tip.
  5. Token Approval: confirm the required allowance is in place so execution doesn’t revert at the last step.
  6. Testing: simulate on a testnet, a fork, or via eth_call first; it’s cheaper to fail there.
  7. Smart Contract Audit: if you’ve built or are interacting with smart contracts, make sure they’ve been audited.
  8. Address Screening: check counterparty addresses against sanctions and reputation lists.

These checks catch most avoidable failures before they cost gas.

  • When you're using eth_call, it's best to target a specific block; going for “safe” or “finalized” blocks will give you more reliable simulations for end users. Check out the details on ethereum.org.
  • Want to simulate balances, bytecode, or storage without having to redeploy your test fixtures? Just use state overrides! This feature is supported in Geth and you can find more info in the ecosystem tools documentation on geth.ethereum.org.
  • If you're dealing with complex interactions, consider creating access lists with eth_createAccessList (EIP-2930) to minimize those cold-access penalties. More info can be found at support.huaweicloud.com.
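An access-list request takes the same call object as eth_call plus a block tag; the node executes the call and returns the addresses and storage keys touched (values below illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "eth_createAccessList",
  "params": [
    {"from": "0xUserAddress", "to": "0xYourContract", "data": "0xYourCalldata"},
    "latest"
  ]
}
```

The result contains an accessList array and a gasUsed estimate; attach the list to a type-1 or type-2 transaction only when the warm-access savings exceed the cost of declaring the list itself.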

Example: Simulate with a Temporary Balance and Storage Slot

State overrides live entirely in the RPC layer: the optional third parameter to eth_call lets you patch an account’s balance, code, or individual storage slots for the duration of a single simulated call, with nothing deployed and no on-chain effect. The request below pretends that 0xUserAddress holds 10 ETH and that one storage slot of the contract has a chosen value:

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "eth_call",
  "params": [
    {"to":"0xYourContract","data":"0x..."},
    "safe",
    {
      "0xUserAddress": {"balance": "0x8AC7230489E80000"},
      "0xYourContract": {"stateDiff": {"0x<slot>": "0x<value>"}}
    }
  ]
}

9) Robust write path: submission, retries, and confirmations

Simplifying the Mempool’s Quirks with a Clean API and Idempotency

The mempool is the waiting room for pending transactions, and it is unpredictable: fees fluctuate, transactions get replaced or dropped, and different nodes see different contents. A good write path hides those quirks behind a clean API:

  1. Clear endpoints: submit, replace, cancel, and status operations instead of raw sendRawTransaction plumbing.
  2. Idempotency: clients can safely re-send a request; the same idempotency key always resolves to the same transaction, so retries never create duplicates.
  3. Robust error handling: normalize each client’s mempool error strings into stable codes with recovery advice.
  4. Rich return data: every response says where the transaction stands (pending, included, finalized, dropped) and why.

  • Expect common send errors like “replacement transaction underpriced,” “nonce too low,” and “already known.” Your gateway should normalize these provider-specific messages into stable codes with recovery advice (like bumping the priority fee for replacements).
  • Use idempotency keys and safe retry strategies. Treat “already known” as success-pending and resolve it by polling the transaction hash.
  • On reads, don’t report inclusion until your confirmation policy says otherwise. For Ethereum mainnet, expose two thresholds:

    • “included”: first seen in a confirmed block (notify users quickly, but flag it as provisional).
    • “finalized”: past the finality window, which typically takes about 15 minutes.
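The idempotency-plus-“already known” handling can be folded into one submit function. A hedged sketch, where `rpc.send` and the key-value `store` are hypothetical interfaces:

```javascript
// Hypothetical idempotent submit: store maps idempotencyKey -> txHash.
async function submitTransaction(rpc, store, idempotencyKey, rawTx) {
  const seen = await store.get(idempotencyKey);
  if (seen) return { txHash: seen, status: "pending" }; // safe retry, no resend
  try {
    const txHash = await rpc.send("eth_sendRawTransaction", [rawTx]);
    await store.set(idempotencyKey, txHash);
    return { txHash, status: "pending" };
  } catch (err) {
    // The node already holds this exact transaction: success-pending,
    // resolved later by polling the transaction hash.
    if (/already known/i.test(err.message)) {
      return { txHash: null, status: "pending" };
    }
    throw err; // nonce too low, underpriced, etc. surface to the caller
  }
}
```

Note that the store write happens after broadcast, so a crash between the two at worst re-broadcasts an identical transaction, which the node deduplicates.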

10) Cross‑chain semantics and rollup finality

Designing Your API: Differentiating L2 Local Finality from L1 Final/Withdrawable

When creating your API, it's essential to make a clear distinction between “L2 local finality” and “L1 final/withdrawable.” Here’s how you can approach this:

Understanding the Concepts

  • L2 Local Finality: This refers to the point at which a transaction is considered final within a Layer 2 (L2) solution. It's a quick confirmation that allows for faster and cheaper transactions but doesn't mean that the transaction is fully settled on the Layer 1 (L1) blockchain.
  • L1 Final/Withdrawable: This represents the state when a transaction is fully confirmed on the L1 blockchain, meaning it can be withdrawn or spent. This is the ultimate level of finality that guarantees security and permanence.

API Design Considerations

  1. Endpoints: Create separate endpoints for L2 and L1 finality. For example:

    • GET /api/l2/finality/{transactionId} to check the local finality status of a transaction on L2.
    • GET /api/l1/finality/{transactionId} to see if the transaction is final on L1.
  2. Response Structure: Each endpoint should return a structured response that indicates the status clearly. Here’s a sample response structure:

    {
      "transactionId": "12345",
      "l2Finality": {
        "isFinal": true,
        "confirmationTime": "2023-10-01T12:00:00Z"
      },
      "l1Finality": {
        "isFinal": false,
        "withdrawable": false
      }
    }
  3. Error Handling: Make sure to handle errors gracefully. If a user queries a transaction that doesn't exist, return a message like:

    {
      "error": "Transaction not found."
    }
  4. Documentation: Clearly document the API behavior, including what constitutes local finality versus what is final on L1. Include examples for developers to reference.

Conclusion

By separating L2 local finality from L1 final/withdrawable states in your API, bridges and exit flows can show users the true status of their funds at every step.

  • Optimistic Rollups: Just a heads up, withdrawals only wrap up after the challenge window--typically around 7 days on the mainnet. Make sure your API clearly highlights this status for any bridges or exits. Check it out for more details: docs.optimism.io.
  • Arbitrum’s Defaults: You’re looking at about 45,818 L1 blocks, which translates to roughly a week, plus a little buffer of around 200 blocks. Chains can tweak this to fit their needs. If you're working with enterprise SLAs, make sure to highlight those specific chain parameters. More info can be found here: docs.arbitrum.io.
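These windows are just chain parameters, so the API can surface an earliest-withdrawal estimate. A sketch using the defaults discussed above, with a 12-second L1 block time assumed; real chains override these values, so read them from chain config:

```javascript
// Estimate when an optimistic-rollup withdrawal becomes claimable on L1.
// Defaults mirror the parameters above; chains can and do override them.
function earliestWithdrawalMs(
  provenAtMs,
  { challengeBlocks = 45818, bufferBlocks = 200, l1BlockTimeSec = 12 } = {}
) {
  const waitMs = (challengeBlocks + bufferBlocks) * l1BlockTimeSec * 1000;
  return provenAtMs + waitMs;
}

const t = earliestWithdrawalMs(Date.now());
// With the defaults this lands roughly 6.4 days out.
```

Returning this timestamp alongside the withdrawal status lets frontends show a countdown instead of a vague “about a week.”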

11) Indexing strategy: when RPC is not enough

When it comes to product analytics, leaderboards, and keeping track of historical scans, you'll usually need an indexer on board.

  • The Graph Substreams handle whole chains using parallelization and a super reliable fault tolerance system. You can easily stream the transformed data straight to warehouses or services. Check it out here: (thegraph.com).
  • If you’re looking for managed options, Goldsky and similar platforms offer subgraphs and pipelines across a bunch of networks, all with GraphQL endpoints. This can really speed up your time to market. Learn more at: (goldsky.com).

When you're designing your API, think about creating a consistent data layer, like this: “/nft-owners?block=finalized.” It’s a good idea to have that backed by an indexer to handle scale, and if there are any freshness issues, you can always rely on RPC as a backup.


12) Documentation, discovery, and versioning

Treat your API as a Product

Treat the API as a product whose contract is a machine-readable spec: OpenRPC for JSON-RPC services, OpenAPI for REST companions. The payoff:

  • Clarity: the spec states methods, parameters, and errors precisely, so integrators don’t have to reverse-engineer behavior.
  • Automation: documentation, client SDKs, mocks, and conformance tests can be generated from the spec instead of hand-maintained.
  • Interoperability: spec-driven tooling makes it easier for other services and languages to integrate with your API.

The workflow: choose a format, define every method with its params and results, generate interactive docs (e.g., Swagger UI or ReDoc for OpenAPI), and keep the spec updated in lockstep with the implementation.

  • The Ethereum execution‑apis spec is crafted using OpenRPC, so you might want to create a similar setup in your own service. Make sure to expose rpc.discover for client code generation, testing, and tracking changes. Check it out here: (github.com).
  • OpenRPC hooks you up with generators, documentation, and validation that’s pretty similar to OpenAPI but tailored for JSON-RPC. By building from a spec, you can keep things consistent across your microservices and programming languages. Dive deeper here: (open-rpc.org).

13) Practical design patterns you can copy today

1) Batching Hot Reads

You can batch JSON-RPC calls together to cut down on latency and per-request overhead. Just keep in mind that responses may come back in any order; you'll need to match them to requests using their respective IDs.

[
  {"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]},
  {"jsonrpc":"2.0","id":2,"method":"eth_getBalance","params":["0x...", "safe"]},
  {"jsonrpc":"2.0","id":3,"method":"eth_getTransactionCount","params":["0x...", "latest"]}
]

Batch requests are part of the JSON‑RPC 2.0 spec itself, and a lot of clients also parallelize them behind the scenes. You can check it out over at json-rpc.org.
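The ID-matching step is where bugs usually creep in. Here's a minimal sketch (the helper names are ours, not from any library) that builds a batch and pairs each response back to its request by `id`:

```javascript
// Assign sequential ids to a list of calls, producing a JSON-RPC 2.0 batch.
function buildBatch(calls) {
  return calls.map((c, i) => ({
    jsonrpc: "2.0",
    id: i + 1,
    method: c.method,
    params: c.params,
  }));
}

// Pair (possibly out-of-order) responses with their requests by id,
// returning results in the original request order.
function matchBatch(requests, responses) {
  const byId = new Map(responses.map((r) => [r.id, r]));
  return requests.map((req) => {
    const res = byId.get(req.id);
    if (!res) throw new Error(`missing response for id ${req.id}`);
    if (res.error) throw new Error(`${req.method} failed: ${res.error.message}`);
    return res.result;
  });
}
```

Because results come back positionally aligned with the requests, downstream code never has to care that the server shuffled the batch.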

2) Durable event backfills

Try out a “binary search” chunker that can flexibly shrink block windows when you hit that pesky −32005 “too many results” error. Once you tame that, gently expand the windows to get the most out of your throughput. Providers’ documentation suggests keeping those windows on the smaller side--around 3-5k for Ethereum--to steer clear of timeouts. Check it out here: (docs.chainstack.com).
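The shrink-and-regrow loop can be sketched like this. It's a sketch, not provider code: `fetchLogs(from, to)` is a caller-supplied function (synchronous here for clarity), and the −32005 code and 5,000-block ceiling are assumptions based on common provider limits:

```javascript
// Adaptive eth_getLogs backfill: halve the block window on a
// -32005 "too many results" error, then grow it back after successes.
function backfillLogs(fromBlock, toBlock, fetchLogs, maxWindow = 5000) {
  const logs = [];
  let window = maxWindow;
  let from = fromBlock;
  while (from <= toBlock) {
    const to = Math.min(from + window - 1, toBlock);
    try {
      logs.push(...fetchLogs(from, to));
      from = to + 1;
      window = Math.min(window * 2, maxWindow); // gently expand again
    } catch (err) {
      if (err.code === -32005 && window > 1) {
        window = Math.ceil(window / 2); // shrink and retry the same range
      } else {
        throw err; // anything else is a real failure
      }
    }
  }
  return logs;
}
```

The same shape works with an async fetcher; the key invariant is that a failed range is retried with a smaller window rather than skipped.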

3) Webhook Fan-out

Get in on the action by subscribing to mined or address activity webhooks. Don’t forget to verify those signatures and then write to a queue (like Kafka or PubSub). Make sure to acknowledge within the provider timeouts to dodge any pesky retries. Alchemy’s Address Activity is pretty powerful and can handle really large address sets per webhook. Check it out here: (alchemy.com)

4) Finality-Tiered Caches

Keep it simple with three caches: the latest cache (TTL seconds), the safe cache (updated with each safe head), and the finalized cache (which is append-only). Make sure to route your GET requests appropriately. The Ethereum documentation clearly backs up the use of safe and finalized tags across core methods. Check it out here: ethereum.org.
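A minimal sketch of that three-tier cache (key shape, TTLs, and the invalidation hook are all illustrative choices, not a standard):

```javascript
// Finality-tiered read cache: "latest" entries expire on a short TTL,
// "safe" entries are cleared on each new safe head, and "finalized"
// entries are append-only (never evicted).
class TieredCache {
  constructor() {
    this.stores = { latest: new Map(), safe: new Map(), finalized: new Map() };
  }
  key(method, params) {
    return `${method}:${JSON.stringify(params)}`;
  }
  get(tag, method, params) {
    const hit = this.stores[tag].get(this.key(method, params));
    if (!hit) return undefined;
    if (hit.expiresAt && Date.now() > hit.expiresAt) return undefined;
    return hit.value;
  }
  set(tag, method, params, value, ttlMs) {
    const expiresAt = tag === "latest" ? Date.now() + (ttlMs ?? 2000) : null;
    this.stores[tag].set(this.key(method, params), { value, expiresAt });
  }
  onNewSafeHead() {
    this.stores.safe.clear(); // refresh safe-tier data on each safe head
  }
}
```

Route each GET by its requested block tag, and wire `onNewSafeHead` to your head-tracking subscription.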

5) Fee Policy Abstraction

Let’s make things simpler by offering a single endpoint: “POST /transactions/estimate.” Internally it calls eth_feeHistory for base-fee history and reward percentiles (plus baseFeePerBlobGas where the client supports it), and returns structured suggestions for type‑2 or type‑3 transactions, complete with confidence bands and a suggested tip ladder. Check out more details at quicknode.com.
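A sketch of the fee math behind such an endpoint. The field names follow the execution-apis spec for eth_feeHistory; the 2x base-fee headroom is a common heuristic, not a protocol rule:

```javascript
// Turn an eth_feeHistory response into a tip ladder for type-2 txs.
// reward[][] has one row per block and one column per requested
// percentile; baseFeePerGas has blockCount + 1 entries, the last being
// the projected next-block base fee.
function suggestFees(feeHistory) {
  const baseFees = feeHistory.baseFeePerGas.map((h) => BigInt(h));
  const nextBaseFee = baseFees[baseFees.length - 1];
  const rows = feeHistory.reward.length;
  const cols = feeHistory.reward[0].length;
  const ladder = [];
  for (let c = 0; c < cols; c++) {
    let sum = 0n;
    for (let r = 0; r < rows; r++) sum += BigInt(feeHistory.reward[r][c]);
    const tip = sum / BigInt(rows); // average tip at this percentile
    ladder.push({
      maxPriorityFeePerGas: tip,
      maxFeePerGas: nextBaseFee * 2n + tip, // headroom for base-fee growth
    });
  }
  return ladder;
}
```

Each ladder entry maps naturally onto a confidence band (e.g. the 25th-percentile column is "slow", the 95th is "urgent").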

6) AA Readiness

Now it's time to design the “/userops/simulate” and “/userops/send” endpoints. For the simulation part, use eth_call with state overrides against the EntryPoint. Later on, you can forward sends to your chosen bundler infrastructure without needing to tweak your client contracts. You can find more details in the ERC‑4337 documentation.


14) Example: production‑grade “Get Transfers” endpoint

Requirements

  • Reorg-safe: Survives chain reorganizations without serving duplicated or orphaned transfers.
  • Scalable: Able to grow and handle increased loads efficiently.
  • Low-latency: Provides quick responses with minimal delay.
  • Input: You’ll need the contract address, a range for fromBlock/toBlock or a specific time frame, and decide on consistency: latest|safe|finalized.
  • Behavior:

    • If you go with finalized: we'll serve it straight from the indexer and paginate based on block number and log index.
    • If you choose safe: we’ll query the RPC to find the delta since the last safe checkpoint and remove duplicates using (txHash, logIndex).
    • For latest: we’ll include a “reorgWarning: true” field in the response just to keep you informed.
  • Implementation bits:

    • We’ll utilize eth_getLogs with 2,000 to 5,000 block windows and apply OR-topics for the Transfer signature. Plus, we’ll add an address filter to help narrow things down. (ethereum.org)
    • For real-time updates, we’ll push live updates via WS logs subscription using the same filter and merge everything into one store. (chainnodes.org)
    • If you're dealing with high-traffic collections, consider migrating to a Substreams pipeline that produces normalized transfer rows. (thegraph.com)
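The safe-tier merge described above hinges on deduplicating by (txHash, logIndex). A sketch (field names are our own row shape, not a standard):

```javascript
// Merge indexer rows with a fresh RPC delta, deduplicating on
// (txHash, logIndex) and ordering by (blockNumber, logIndex).
function mergeTransfers(indexed, rpcDelta) {
  const seen = new Set();
  const out = [];
  for (const log of [...indexed, ...rpcDelta]) {
    const k = `${log.txHash}:${log.logIndex}`;
    if (seen.has(k)) continue; // indexer copy wins; RPC duplicate dropped
    seen.add(k);
    out.push(log);
  }
  return out.sort(
    (a, b) => a.blockNumber - b.blockNumber || a.logIndex - b.logIndex
  );
}
```

Putting the indexer rows first means the durable copy wins whenever both sources report the same event.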

15) Security and compliance quick‑wins

  • Make sure to validate and normalize all JSON-RPC inputs. Stick to the hex formats and quantity rules laid out in the specs. (eips.ethereum.org)
  • When it comes to structured signatures, like for policy approvals, use EIP-712 to steer clear of any confusing byte strings. (eips.ethereum.org)
  • For off-chain authentication, go with SIWE and make sure to tie it to the origin (wallets should check domains). If you need more detailed permissions, consider using SIWE ReCaps. (eips.ethereum.org)
  • When dealing with proofs, offer eth_getProof for customers needing trust-minimized verification or cross-domain proofs. Just a heads up: you'll need access to the state DB for this, and some clients may optimize by using commitment history. (eips.ethereum.org)

16) Reliability engineering: what to automate on day one

  • On rate limits and server-side errors, back off and retry instead of hammering with short client-side timeouts--Alchemy suggests using exponential backoff to keep those request storms at bay. (alchemy.com)
  • Let's tackle those mempool errors the right way: treat "already known" as a success that's still pending, and if you get a "replacement underpriced" message, just nudge up that priority fee. (docs.tatum.io)
  • It's a good idea to keep track of metrics for each method: you’ll want to look at P50/P95 latency, error code counts, provider CU usage, and how often your cache hits.
  • For those non-standard methods (like eth_getBlockReceipts), consider feature-flipping them based on what your client detects when it starts up. (docs.chainstack.com)
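The backoff schedule itself is worth pinning down. Here's a sketch using full jitter (the base and cap values are illustrative, and the random draw is injectable so the schedule is testable):

```javascript
// Exponential backoff with full jitter: each retry waits a random
// duration in [0, min(cap, base * 2^attempt)).
function backoffDelays(attempts, baseMs = 250, capMs = 10000, rand = Math.random) {
  const delays = [];
  for (let i = 0; i < attempts; i++) {
    const ceiling = Math.min(capMs, baseMs * 2 ** i);
    delays.push(Math.floor(rand() * ceiling));
  }
  return delays;
}
```

Full jitter spreads retries out so that a provider hiccup doesn't turn every client into a synchronized thundering herd.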

17) Reference architecture (what we deploy for clients)

  • Edge

    • Handles HTTP/WS termination, WAF, and request validation.
    • An OpenRPC-generated gateway that routes requests to:
      • An RPC pool offering multi-provider failover for each chain.
      • An indexer cluster featuring Substreams and Graph pipelines.
      • A queue that manages webhooks leading to Kafka/PubSub.
    • Finality-tiered caches for the latest, safe, and finalized data.
  • Services

    • Read API: Combines queries across the cache, RPC, and indexer.
    • Write API: Handles idempotent transaction submission and user operations, along with pre-simulation and confirmation tracking.
    • Streams: WS multiplexers and webhook consumers that emit domain events.
  • Observability

    • OpenTelemetry traces for each RPC call, complete with request sampling, blob fee dashboards, and reorg counters.
  • Documentation

    • Check out the OpenRPC spec, featuring rpc.discover for code generation and compatibility tests. You can find it here.

18) What’s next

  • Keep an eye on the blob market as we go through production. When things get congested, make sure to share some estimates on calldata fallback and communicate any cost differences clearly to our customers. (blocknative.com)
  • Ensure our frontends are compliant with EIP‑6963; wallet ecosystems are transitioning over to this pretty quickly. (eips.ethereum.org)
  • Get ready for modular AA (ERC‑6900) as we work to standardize module ecosystems around execution and validation hooks. (eips.ethereum.org)

7Block Labs can help

Looking for a solid Web3 API gateway that’s ready for production? Need an indexing pipeline for your analytics, or maybe some AA-ready transaction services? 7Block Labs has got you covered. We’ve rolled out these solutions for plenty of enterprises and fast-growing startups alike. We’ll customize a design that meets your latency, cost, and compliance needs--and you won’t just get a solution, you'll also have a versioned OpenRPC spec, dashboards, and runbooks that your team can take over from day one.

Feel free to get in touch if you're interested in an architecture review, a build sprint, or if you want a complete “API + indexing + observability” stack ready to go in weeks instead of months.


Appendix: handy payloads

  1. eth_feeHistory with blob fields (where it's supported)
{
  "jsonrpc":"2.0",
  "id": 4,
  "method":"eth_feeHistory",
  "params":["0x40", "latest", [5,25,50,75,95]]
}

Check out the fields like baseFeePerGas, reward[], baseFeePerBlobGas, and blobGasUsedRatio to adjust the fees for both type 2 and type 3. You can find more info here.

2) SIWE Message Skeleton (Binds to Origin; Validate Server-Side)

When working with SIWE (Sign-In with Ethereum), message structure matters. Here’s a structured outline of the fields. Note that the wallet ultimately signs a plain-text rendering of these fields (shown further below), and your server must validate everything before issuing a session.

{
  "domain": "example.com",
  "address": "0xYourEthereumAddress",
  "statement": "Sign in to access your account",
  "uri": "https://example.com/auth",
  "version": "1",
  "chainId": 1,
  "nonce": "randomly-generated-nonce",
  "issuedAt": "2023-10-01T00:00:00Z",
  "expirationTime": "2023-10-02T00:00:00Z",
  "notBefore": "2023-10-01T00:00:00Z",
  "requestId": "unique-request-id",
  "resources": []
}

Key Elements to Remember

  • domain: Must match the origin the user is interacting with; wallets compare it against the requesting site.
  • address: The Ethereum address performing the sign-in.
  • statement: A human-readable explanation of what the user is signing.
  • uri: The RFC 3986 URI that is the subject of the signing (typically your app’s URL)--not a post-login redirect.
  • version: The ERC‑4361 message version; currently "1".
  • chainId: The chain the session is bound to (1 for Ethereum mainnet).
  • nonce: A unique, server-issued nonce per request to prevent replay attacks.
  • issuedAt: When the message was created (ISO 8601).
  • expirationTime: When the signed message stops being valid.
  • notBefore: The earliest time the message may be used.
  • requestId: An optional identifier for correlating the request on your side.
  • resources: Optional URIs the sign-in grants access to; leave empty if unused.

Validate all of these server-side before issuing a session. Rendered as the plain-text message the wallet actually signs (per ERC‑4361), the same fields look like this:

example.com wants you to sign in with your Ethereum account:
0xYourAddress

Sign in to Example

URI: https://example.com/login
Version: 1
Chain ID: 1
Nonce: 4f7a2c3d
Issued At: 2026-01-07T19:00:00Z

Standardized by ERC‑4361, wallets check the domain and scheme. You can find more details here: (eips.ethereum.org).
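A small helper for rendering that string. This is a sketch: field order and labels follow ERC‑4361, optional fields are omitted when absent, and the input shape mirrors the JSON skeleton above:

```javascript
// Render the ERC-4361 plain-text message a wallet signs.
function formatSiweMessage(m) {
  const lines = [
    `${m.domain} wants you to sign in with your Ethereum account:`,
    m.address,
    "",
  ];
  if (m.statement) lines.push(m.statement, "");
  lines.push(
    `URI: ${m.uri}`,
    `Version: ${m.version}`,
    `Chain ID: ${m.chainId}`,
    `Nonce: ${m.nonce}`,
    `Issued At: ${m.issuedAt}`
  );
  if (m.expirationTime) lines.push(`Expiration Time: ${m.expirationTime}`);
  if (m.resources && m.resources.length) {
    lines.push("Resources:", ...m.resources.map((r) => `- ${r}`));
  }
  return lines.join("\n");
}
```

In production, prefer a maintained SIWE library for parsing and verification; a hand-rolled formatter is fine, but verification has sharp edges (EIP-1271 contract wallets, nonce storage, clock skew).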

3) WebSocket Logs Subscribe (Push)

If you want to tap into real-time updates with WebSocket logs, subscribing to push notifications is the way to go. Here’s how you can do it.

Step-by-Step Guide

  1. Connect to the WebSocket Server
    First things first, you’ll need to establish a connection to your WebSocket server. Here’s a quick snippet to get you started:

    const socket = new WebSocket('wss://your-websocket-url.com');
    
    socket.onopen = () => {
        console.log('Connected to WebSocket server!');
    };
  2. Subscribe to Logs
    Once you're connected, subscribe with a standard eth_subscribe JSON-RPC call--the address/topics filter takes the same shape as eth_getLogs:

    socket.send(JSON.stringify({
        jsonrpc: '2.0',
        id: 1,
        method: 'eth_subscribe',
        params: ['logs', { address: '0xToken', topics: ['0xddf252ad...'] }]
    }));
  3. Listen for Incoming Messages
    The first reply carries your subscription ID; after that, each matching log arrives as an eth_subscription notification:

    socket.onmessage = (event) => {
        const msg = JSON.parse(event.data);
        if (msg.id === 1) {
            console.log('Subscription ID:', msg.result);
        } else if (msg.method === 'eth_subscription') {
            console.log('New log:', msg.params.result);
        }
    };
  4. Handle Errors
    It's always good to plan for the unexpected. Keep an eye on errors with this simple listener:

    socket.onerror = (error) => {
        console.error('WebSocket error:', error);
    };
  5. Close the Connection
    When you’re done with your logging, don’t forget to close the connection to keep things tidy:

    socket.close();
    console.log('WebSocket connection closed.');

Key Points to Remember

  • Use the right WebSocket URL for your server.
  • Make sure to parse incoming messages correctly.
  • Always have error handling in place to troubleshoot any issues.

That’s all there is to it! With this setup, you’ll be able to subscribe to WebSocket logs and get real-time updates. Happy coding!

{"id":1,"jsonrpc":"2.0","method":"eth_subscribe","params":["logs",{"address":"0xToken","topics":["0xddf252ad..."]}]}

You’ll need WSS for this. “newHeads” is a cheap way to track block cadence; “newPendingTransactions” streams the mempool and is far heavier, so gate it behind explicit opt-in. Check it out here: (chainnodes.org).

4) eth_getProof (account + storage proof)

This is where things get interesting! The eth_getProof method lets you retrieve a proof of an account and its storage, which is super helpful for verifying the state of a particular account without needing to trust any single source.

To use it, you’ll pass three parameters:

  • address: The Ethereum address of the account you’re interested in.
  • storageKeys: An array of storage keys to prove specific storage slots (may be empty).
  • block: A block number or tag (“latest”, “safe”, “finalized”) pinning the state the proof refers to.

Here’s how you can call it:

{
  "jsonrpc": "2.0",
  "method": "eth_getProof",
  "params": [
    "0xYourAccountAddressHere",
    ["0xYourStorageKey1", "0xYourStorageKey2"],
    "latest"
  ],
  "id": 1
}

When you make this call, you’ll get back a proof that looks something like this:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "address": "0xYourAccountAddressHere",
    "accountProof": [ /* RLP-encoded Merkle-Patricia trie nodes */ ],
    "balance": "0x1234567890",
    "nonce": "0x1",
    "codeHash": "0xYourCodeHash",
    "storageHash": "0xYourStorageHash",
    "storageProof": [
      {
        "key": "0xYourStorageKey1",
        "value": "0xYourStorageValue1",
        "proof": [ /* proof data */ ]
      }
      // additional storage proofs...
    ]
  }
}

This proof can then be used to verify the state of the account at the time you specified, which is pretty neat! So whether you're doing audits or verifying transactions, eth_getProof is definitely a method to keep in your toolbox.

{"jsonrpc":"2.0","id":5,"method":"eth_getProof","params":["0xAccount", ["0xSlot"], "finalized"]}

If you want to make the most out of trust-minimized verification pipelines, make sure your node or client is set up to handle proofs efficiently. Check it out here: (eips.ethereum.org)

5) Access Lists to Pre-Warm Storage (EIP-2930)

EIP-2930 introduces access lists, a neat way to streamline Ethereum transactions. This change helps users specify which addresses and storage keys they plan to access during a transaction. By doing this, the Ethereum network can pre-warm the necessary storage, leading to lower gas costs and improved efficiency.

Here’s a quick rundown of how it works:

  • Access Lists: These are just lists that detail what addresses and storage slots your transaction will touch. When you create a transaction, you can include this list to let the network know what to expect.
  • Gas Savings: Transactions that utilize access lists can potentially save on gas fees because the Ethereum network can optimize the way it processes them. This means you might end up paying less for transactions that would normally require access to multiple storage slots.
  • Improved Efficiency: By pre-warming the storage you need, the network becomes more efficient. This helps in reducing the overall congestion and enhances the user experience, especially during peak times.

Access lists provide a straightforward yet powerful way to make your transactions more cost-effective and faster. If you want to dive deeper into the technical details, you can check out the full EIP-2930 here.

{"jsonrpc":"2.0","id":6,"method":"eth_createAccessList","params":[{"from":"0x...","to":"0x...","data":"0x..."}, "latest"]}

Reduce Cold SLOAD Penalties on Complex Transactions

Access lists pay off mainly on transactions that touch many addresses and storage slots: pre-declaring them makes those first accesses warm instead of cold, usually at a small net discount. For a deeper walkthrough, see: support.huaweicloud.com.
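One practical wrinkle: an access list carries its own intrinsic cost, so it's only worth attaching when the estimate with the list beats the plain estimate. A sketch of that decision (the field names follow eth_createAccessList's result shape; the helper itself is ours):

```javascript
// Fold an eth_createAccessList result into a type-2 transaction request,
// but only when the list actually lowers the gas estimate.
function withAccessList(txRequest, createAccessListResult, plainGasEstimate) {
  const { accessList, gasUsed } = createAccessListResult;
  if (BigInt(gasUsed) >= BigInt(plainGasEstimate)) {
    return txRequest; // the list's intrinsic cost doesn't pay for itself
  }
  return { ...txRequest, type: "0x2", accessList };
}
```

Run eth_estimateGas and eth_createAccessList against the same block tag, or the comparison can be skewed by state changes between calls.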


If you want us to transform this blueprint into a solid backlog for your team--think SDK interfaces, OpenRPC spec, and a staging deployment--7Block Labs is here to help.

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.