9 APIs you’ll love for AI integrations and automated workflows 9 Jun 2025, 11:00 am

The world is awash in data and all you need to do is ask for it. Of course, you also need to ask in the right way, and in the world of software development, that means using an API. Given the right combination of XML and JSON, there are thousands of APIs ready, willing, and able to answer your queries.

This article showcases some of the most interesting and relevant APIs we could find, especially ones that support integration with AI technology. Surely, one of these has a place in your stack.

Zapier’s natural language actions

In the old days, most APIs were complex enough to come with an instruction manual. Now, Zapier AI Actions adds AI to the mix, so users can request API actions using “natural language.” The AI translates human language so you don’t need to fuss over strict syntax rules when asking for what you want. Is it more flexible? Yes. Could it produce something unexpected? We’ll soon find out. Tossing aside the rigid format of REST semantics has its perks. We’ll probably see more APIs integrating natural language processing in the future.

Seam: An API for IoT

Most APIs are used to manipulate data. The Seam API is a universal system for controlling a matrix of devices, a.k.a., the Internet of Things. Seam lets you control hundreds of devices from one central application, simplifying the process of building an intelligent home or office. It’s expanding the range of applications from the virtual Internet into the actual world.

Hugging Face Transformers API

Do you need to train an AI model? There’s no need to start from scratch. Hugging Face’s Transformers API makes it relatively easy to fire up PyTorch, TensorFlow, or JAX and access dozens of foundation models. Your data is melded with the models’ vast training sets, and you can use formats like ONNX or TorchScript to export the results and run them anywhere.

HumanLayer: Connecting AI agents to humans

Normally, APIs connect computers to other computers on behalf of humans. HumanLayer inverts this paradigm with an API framework that allows computers to contact humans. The idea is that AI agents can handle most issues that come up when performing requested operations. But there will always be cases that require the input of a thoughtful bundle of cells in meatspace. HumanLayer provides the structure and integration format for AIs to seek human contact when they need it.

Bluesky Firehose API

By nature, social media postings are public, but not all sites make it easy to download them. Bluesky’s Firehose API is already being used for hacking projects like AI training. Do you want to use social media posts to analyze public sentiment about a particular topic? Or maybe track the flow of certain ideas or memes? It’s all there, waiting for you to tap into.

OpenAI Batch API

Not every computing job needs to be done immediately; some can be postponed for seconds, minutes, or even hours. OpenAI now offers a way to save money by batching workloads that can wait: the OpenAI Batch API, which claims to lower costs by as much as 50 percent.

Firecrawl

Some people like writing out their documents in Markdown and using templates to automatically transform those documents into displayable HTML. The Markdown is stored in README.md files that mostly just gather dust. But what if those files were still useful? Firecrawl scrapes a web page and transforms the HTML back into Markdown, a format that’s much easier for data analysis and LLM training. See how that works?

Signature API

Some workflows require a bit of authentication and certification. Did Hal really sign off on that expense? Signature API appends legally binding digital signatures at specific moments in the workflow, so your team has an authenticated third-party timestamp for the moment the button was pressed. As web applications absorb more of the workload, Signature API is the kind of reasonably priced tool that helps us ensure real accountability.

Bruno

There are two sides to every API transaction, and Bruno lets you simulate the client side while testing. Ideally, every bit of documentation would be clear and concise, but when the text isn’t so illuminating, Bruno lets you watch the data flow. Sometimes, just seeing the parameters and the API response answers more questions than the best documentation. Bruno is not so much an API as a tool for exploring other APIs.


Building a multi-zone and multi-region SQL Server Failover Cluster Instance in Azure 9 Jun 2025, 11:00 am

Much has been written about SQL Server Always On Availability Groups, but the topic of SQL Server Failover Cluster Instances (FCI) that span both availability zones and regions is far less discussed. However, for organizations that require SQL Server high availability (HA) and disaster recovery (DR) without the added licensing costs of Enterprise Edition, SQL Server FCI remains a powerful and cost-effective solution.

In this article, we will explore how to deploy a resilient SQL Server FCI in Microsoft Azure, leveraging Windows Server Failover Clustering (WSFC) and various Azure services to ensure both high availability and disaster recovery. While deploying an FCI in a single availability zone is relatively straightforward, configuring it to span multiple availability zones—and optionally, multiple regions—introduces a set of unique challenges, including cross-zone and cross-region failover, storage replication, and network latency.

To overcome these challenges, we must first establish a properly configured network foundation that supports multi-region SQL Server FCI deployments. This article includes a comprehensive PowerShell script that automates the necessary networking configuration, ensuring a seamless and resilient infrastructure. This script:

  • Creates two virtual networks (vNets) in different Azure paired regions
  • Establishes secure peering between these vNets for seamless cross-region communication
  • Configures network security groups (NSGs) to control inbound and outbound traffic, ensuring SQL Server and WSFC can function properly
  • Associates the NSGs with subnets, enforcing security policies while enabling necessary connectivity

By automating these steps, we lay the groundwork for SQL Server FCI to operate effectively in a multi-region Azure environment. Additionally, we will cover key technologies such as Azure Shared Disks, SIOS DataKeeper, Azure Load Balancer, and quorum configuration within WSFC to complete the deployment. By the end of this discussion, you will have a clear roadmap for architecting a SQL Server FCI deployment that is highly available, disaster-resistant, and optimized for minimal downtime across multiple Azure regions.

Pre-requisites

Before deploying a SQL Server Failover Cluster Instance (FCI) across availability zones and regions in Azure, ensure you have the following prerequisites in place:

  1. Azure subscription with necessary permissions
    • You must have an active Azure subscription with sufficient permissions to create virtual machines, manage networking, and configure storage. Specifically, you need Owner or Contributor permissions on the target resource group.
  2. Access to SQL Server and SIOS DataKeeper installation media
    • SQL Server installation media: Ensure you have the SQL Server Standard or Enterprise Edition installation media available. You can download it from the Microsoft Evaluation Center.
    • SIOS DataKeeper installation media: You will need access to SIOS DataKeeper Cluster Edition for block-level replication. You can request an evaluation copy from SIOS Technology.

Configuring networking for SQL Server FCI across Azure paired regions

To deploy a SQL Server Failover Cluster Instance (FCI) across availability zones and regions, you need to configure networking appropriately. This section outlines the automated network setup using PowerShell, which includes:

  1. Creating two virtual networks (vNets) in different Azure paired regions
  2. Creating Subnets – two in the primary region and one in the DR region
  3. Peering between vNets to enable seamless cross-region communication
  4. Configuring a network security group (NSG) to:
    • Allow full communication between vNets (essential for SQL and cluster traffic)
    • Enable secure remote desktop (RDP) access for management purposes

The PowerShell script provided in this section automates these critical networking tasks, ensuring that your SQL Server FCI deployment has a robust, scalable, and secure foundation. Once the network is in place, we will proceed to the next steps in configuring SQL Server FCI, storage replication, and failover strategies.


# Define Variables
$PrimaryRegion = "East US 2"
$DRRegion = "Central US"
$ResourceGroup = "MySQLFCIResourceGroup"
$PrimaryVNetName = "PrimaryVNet"
$DRVNetName = "DRVNet"
$PrimaryNSGName = "SQLFCI-NSG-Primary"
$DRNSGName = "SQLFCI-NSG-DR"
$PrimarySubnet1Name = "SQLSubnet1"
$DRSubnetName = "DRSQLSubnet"
$PrimaryAddressSpace = "10.1.0.0/16"
$PrimarySubnet1Address = "10.1.1.0/24"
$DRAddressSpace = "10.2.0.0/16"
$DRSubnetAddress = "10.2.1.0/24"
$SourceRDPAllowedIP = "98.110.113.146/32"  # Replace with your actual IP
$DNSServer = "10.1.1.102" #set this to your Domain controller

# Create Resource Group if not exists
Write-Output "Creating Resource Group ($ResourceGroup) if not exists..."
New-AzResourceGroup -Name $ResourceGroup -Location $PrimaryRegion -ErrorAction SilentlyContinue

# Create Primary vNet with a subnet
Write-Output "Creating Primary VNet ($PrimaryVNetName) in $PrimaryRegion..."
$PrimaryVNet = New-AzVirtualNetwork -ResourceGroupName $ResourceGroup -Location $PrimaryRegion -Name $PrimaryVNetName -AddressPrefix $PrimaryAddressSpace -DnsServer $DNSServer
$PrimarySubnet1 = Add-AzVirtualNetworkSubnetConfig -Name $PrimarySubnet1Name -AddressPrefix $PrimarySubnet1Address -VirtualNetwork $PrimaryVNet
Set-AzVirtualNetwork -VirtualNetwork $PrimaryVNet

# Create DR vNet with a subnet
Write-Output "Creating DR VNet ($DRVNetName) in $DRRegion..."
$DRVNet = New-AzVirtualNetwork -ResourceGroupName $ResourceGroup -Location $DRRegion -Name $DRVNetName -AddressPrefix $DRAddressSpace -DnsServer $DNSServer
$DRSubnet = Add-AzVirtualNetworkSubnetConfig -Name $DRSubnetName -AddressPrefix $DRSubnetAddress -VirtualNetwork $DRVNet
Set-AzVirtualNetwork -VirtualNetwork $DRVNet

# Configure Peering Between vNets
Write-Output "Configuring VNet Peering..."
$PrimaryVNet = Get-AzVirtualNetwork -Name $PrimaryVNetName -ResourceGroupName $ResourceGroup
$DRVNet = Get-AzVirtualNetwork -Name $DRVNetName -ResourceGroupName $ResourceGroup

# Create Peering from Primary to DR
Write-Output "Creating Peering from $PrimaryVNetName to $DRVNetName..."
$PrimaryToDRPeering = Add-AzVirtualNetworkPeering -Name "PrimaryToDR" -VirtualNetwork $PrimaryVNet -RemoteVirtualNetworkId $DRVNet.Id
Start-Sleep -Seconds 10

# Create Peering from DR to Primary
Write-Output "Creating Peering from $DRVNetName to $PrimaryVNetName..."
$DRToPrimaryPeering = Add-AzVirtualNetworkPeering -Name "DRToPrimary" -VirtualNetwork $DRVNet -RemoteVirtualNetworkId $PrimaryVNet.Id
Start-Sleep -Seconds 10

# Retrieve and update Peering settings
$PrimaryToDRPeering = Get-AzVirtualNetworkPeering -ResourceGroupName $ResourceGroup -VirtualNetworkName $PrimaryVNetName -Name "PrimaryToDR"
$DRToPrimaryPeering = Get-AzVirtualNetworkPeering -ResourceGroupName $ResourceGroup -VirtualNetworkName $DRVNetName -Name "DRToPrimary"

$PrimaryToDRPeering.AllowVirtualNetworkAccess = $true
$PrimaryToDRPeering.AllowForwardedTraffic = $true
$PrimaryToDRPeering.UseRemoteGateways = $false
Set-AzVirtualNetworkPeering -VirtualNetworkPeering $PrimaryToDRPeering

$DRToPrimaryPeering.AllowVirtualNetworkAccess = $true
$DRToPrimaryPeering.AllowForwardedTraffic = $true
$DRToPrimaryPeering.UseRemoteGateways = $false
Set-AzVirtualNetworkPeering -VirtualNetworkPeering $DRToPrimaryPeering

Write-Output "VNet Peering established successfully."

# Create Network Security Groups (NSGs)
Write-Output "Creating NSGs for both regions..."
$PrimaryNSG = New-AzNetworkSecurityGroup -ResourceGroupName $ResourceGroup -Location $PrimaryRegion -Name $PrimaryNSGName
$DRNSG = New-AzNetworkSecurityGroup -ResourceGroupName $ResourceGroup -Location $DRRegion -Name $DRNSGName

# Define NSG Rules (Allow VNet communication and RDP)
$Rule1 = New-AzNetworkSecurityRuleConfig -Name "AllowAllVNetTraffic" -Priority 100 -Direction Inbound -Access Allow -Protocol * `
    -SourceAddressPrefix VirtualNetwork -SourcePortRange * -DestinationAddressPrefix VirtualNetwork -DestinationPortRange *

$Rule2 = New-AzNetworkSecurityRuleConfig -Name "AllowRDP" -Priority 200 -Direction Inbound -Access Allow -Protocol TCP `
    -SourceAddressPrefix $SourceRDPAllowedIP -SourcePortRange * -DestinationAddressPrefix "*" -DestinationPortRange 3389

# Apply Rules to NSGs
$PrimaryNSG.SecurityRules = @($Rule1, $Rule2)
$DRNSG.SecurityRules = @($Rule1, $Rule2)

Set-AzNetworkSecurityGroup -NetworkSecurityGroup $PrimaryNSG
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $DRNSG

Write-Output "NSGs created and configured successfully."

# Associate NSGs with Subnets
Write-Output "Associating NSGs with respective subnets..."
$PrimaryVNet = Get-AzVirtualNetwork -Name $PrimaryVNetName -ResourceGroupName $ResourceGroup
$DRVNet = Get-AzVirtualNetwork -Name $DRVNetName -ResourceGroupName $ResourceGroup
$PrimaryNSG = Get-AzNetworkSecurityGroup -Name $PrimaryNSGName -ResourceGroupName $ResourceGroup
$DRNSG = Get-AzNetworkSecurityGroup -Name $DRNSGName -ResourceGroupName $ResourceGroup

$PrimarySubnet1 = Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $PrimaryVNet -Name $PrimarySubnet1Name `
    -AddressPrefix $PrimarySubnet1Address -NetworkSecurityGroup $PrimaryNSG

$DRSubnet = Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $DRVNet -Name $DRSubnetName `
    -AddressPrefix $DRSubnetAddress -NetworkSecurityGroup $DRNSG

Set-AzVirtualNetwork -VirtualNetwork $PrimaryVNet
Set-AzVirtualNetwork -VirtualNetwork $DRVNet

Write-Output "NSGs successfully associated with all subnets!"

Write-Output "Azure network setup completed successfully!"

Deploying SQL Server virtual machines in Azure with high availability

To achieve HA and DR, we deploy SQL Server Failover Cluster Instance (FCI) nodes across multiple Availability Zones (AZs) within Azure regions. By distributing the SQL Server nodes across separate AZs, we qualify for Azure’s 99.99% SLA for virtual machines, ensuring resilience against hardware failures and zone outages.

Each SQL Server virtual machine (VM) is assigned a static private and public IP address, ensuring stable connectivity for internal cluster communication and remote management. Additionally, each SQL Server node is provisioned with an extra 20GB Premium SSD, which will be used by SIOS DataKeeper Cluster Edition to create replicated cluster storage across AZs and regions. Because Azure does not natively provide shared storage spanning multiple AZs or regions, SIOS DataKeeper enables block-level replication, ensuring that all clustered SQL Server nodes have synchronized copies of the data, allowing for seamless failover with no data loss.

In a production environment, multiple domain controllers (DCs) would typically be deployed, spanning both AZs and regions to ensure redundancy and fault tolerance for Active Directory services. However, for the sake of this example, we will keep it simple and deploy a single domain controller (DC1) in Availability Zone 3 in East US 2 to provide the necessary authentication and cluster quorum support.

The PowerShell script below automates the deployment of these SQL Server VMs, ensuring that:

  • SQLNode1 is deployed in Availability Zone 1 in East US 2
  • SQLNode2 is deployed in Availability Zone 2 in East US 2
  • SQLNode3 is deployed in Availability Zone 1 in Central US
  • DC1 is deployed in Availability Zone 3 in East US 2

By following this deployment model, SQL Server FCI can span multiple AZs and even multiple regions, providing a highly available and disaster-resistant database solution.


# Define Variables
$ResourceGroup = "MySQLFCIResourceGroup"
$PrimaryRegion = "East US 2"
$DRRegion = "Central US"
$VMSize = "Standard_D2s_v3"  # VM Size
$AdminUsername = "sqladmin"
$AdminPassword = ConvertTo-SecureString "YourSecurePassword123!" -AsPlainText -Force
$Credential = New-Object System.Management.Automation.PSCredential ($AdminUsername, $AdminPassword)

# Get Virtual Networks
$PrimaryVNet = Get-AzVirtualNetwork -Name "PrimaryVNet" -ResourceGroupName $ResourceGroup
$DRVNet = Get-AzVirtualNetwork -Name "DRVNet" -ResourceGroupName $ResourceGroup

# Get Subnets
$PrimarySubnet1 = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $PrimaryVNet -Name "SQLSubnet1"
$DRSubnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $DRVNet -Name "DRSQLSubnet"

# Define Static Private IPs
$IP1 = "10.1.1.100"   # SQLNode1 in East US 2, AZ1
$IP2 = "10.1.1.101"   # SQLNode2 in East US 2, AZ2
$IP3 = "10.2.1.100"   # SQLNode3 in Central US, AZ1
$IP4 = "10.1.1.102"   # DC1 in East US 2, AZ3 (no extra disk)

# Function to Create a VM with Static Private & Public IP, Availability Zone, and attach an extra disk (except for DC1)
Function Create-SQLVM {
    param (
        [string]$VMName,
        [string]$Location,
        [string]$SubnetId,
        [string]$StaticPrivateIP,
        [string]$AvailabilityZone,
        [bool]$AttachExtraDisk
    )

    # Create Public IP Address (Static)
    Write-Output "Creating Public IP for $VMName..."
    $PublicIP = New-AzPublicIpAddress -ResourceGroupName $ResourceGroup -Location $Location `
        -Name "$VMName-PublicIP" -Sku Standard -AllocationMethod Static

    # Create Network Interface with Static Private & Public IP
    Write-Output "Creating NIC for $VMName in $Location (Zone $AvailabilityZone)..."
    $NIC = New-AzNetworkInterface -ResourceGroupName $ResourceGroup -Location $Location `
        -Name "$VMName-NIC" -SubnetId $SubnetId -PrivateIpAddress $StaticPrivateIP -PublicIpAddressId $PublicIP.Id

    # Create VM Configuration in the specified Availability Zone
    # (Both East US 2 and Central US support availability zones, so no regional fallback is needed.)
    Write-Output "Creating VM $VMName in $Location (Zone $AvailabilityZone)..."

    $VMConfig = New-AzVMConfig -VMName $VMName -VMSize $VMSize -Zone $AvailabilityZone | `
        Set-AzVMOperatingSystem -Windows -ComputerName $VMName -Credential $Credential | `
        Set-AzVMSourceImage -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer" -Skus "2022-Datacenter" -Version "latest" | `
        Add-AzVMNetworkInterface -Id $NIC.Id | `
        Set-AzVMOSDisk -CreateOption FromImage

    # Conditionally Attach an Extra 20 GB Premium SSD LRS Disk in the same Availability Zone (Not for DC1)
    if ($AttachExtraDisk) {
        Write-Output "Attaching extra 20GB Premium SSD disk to $VMName in Zone $AvailabilityZone..."
        $DiskConfig = New-AzDiskConfig -SkuName "Premium_LRS" -Location $Location -Zone $AvailabilityZone -CreateOption Empty -DiskSizeGB 20
        $DataDisk = New-AzDisk -ResourceGroupName $ResourceGroup -DiskName "$VMName-Disk" -Disk $DiskConfig
        $VMConfig = Add-AzVMDataDisk -VM $VMConfig -Name "$VMName-Disk" -CreateOption Attach -ManagedDiskId $DataDisk.Id -Lun 1
    }

    # Deploy VM
    New-AzVM -ResourceGroupName $ResourceGroup -Location $Location -VM $VMConfig
}

# Deploy SQL Nodes in the specified Availability Zones with Static Public IPs
Create-SQLVM -VMName "SQLNode1" -Location $PrimaryRegion -SubnetId $PrimarySubnet1.Id -StaticPrivateIP $IP1 -AvailabilityZone "1" -AttachExtraDisk $true
Create-SQLVM -VMName "SQLNode2" -Location $PrimaryRegion -SubnetId $PrimarySubnet1.Id -StaticPrivateIP $IP2 -AvailabilityZone "2" -AttachExtraDisk $true
Create-SQLVM -VMName "SQLNode3" -Location $DRRegion -SubnetId $DRSubnet.Id -StaticPrivateIP $IP3 -AvailabilityZone "1" -AttachExtraDisk $true  # West US AZ fallback

# Deploy DC1 in East US, AZ3 with Static Public IP but without an extra disk
Create-SQLVM -VMName "DC1" -Location $PrimaryRegion -SubnetId $PrimarySubnet1.Id -StaticPrivateIP $IP4 -AvailabilityZone "3" -AttachExtraDisk $false

Write-Output "All VMs have been successfully created with Static Public & Private IPs in their respective Availability Zones!"

Completing the SQL Server FCI deployment

With the SQL Server virtual machines deployed across multiple AZs and regions, the next steps involve configuring networking, setting up Active Directory, enabling clustering, and installing SQL Server FCI. These steps will ensure HA and DR for SQL Server across Azure regions. 

1. Create a domain on DC1

The domain controller (DC1) will provide authentication and Active Directory services for the SQL Server Failover Cluster. To set up the Active Directory Domain Services (AD DS) on DC1, we will:

  1. Install the Active Directory Domain Services role.
  2. Promote DC1 to a domain controller.
  3. Create a new domain (e.g., datakeeper.local).
  4. Configure DNS settings to ensure domain resolution.

Once completed, this will allow the cluster nodes to join the domain and participate in Windows Server Failover Clustering (WSFC).
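
The exact commands depend on your environment, but a minimal PowerShell sketch, run on DC1 and assuming the datakeeper.local domain name used in the next step (the Safe Mode password is a placeholder), looks like this:

# Install the AD DS role and management tools
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools

# Promote DC1 to a domain controller for a new forest; DNS is installed alongside it
# (the Safe Mode password below is a placeholder; replace it with your own)
Install-ADDSForest -DomainName "datakeeper.local" `
    -DomainNetbiosName "DATAKEEPER" `
    -InstallDns `
    -SafeModeAdministratorPassword (ConvertTo-SecureString "YourDSRMPassword123!" -AsPlainText -Force) `
    -Force

DC1 restarts automatically when the promotion completes.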

2. Join SQLNode1, SQLNode2, and SQLNode3 to the domain

Now that DC1 is running Active Directory, we will join SQLNode1, SQLNode2, and SQLNode3 to the new domain (e.g., datakeeper.local). This is a critical step, as Windows Server Failover Clustering (WSFC) and SQL Server FCI require domain membership for proper authentication and communication.

Steps:

  1. Join each SQL node to the Active Directory domain.
  2. Restart the servers to apply changes.
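
A minimal sketch of the join, run on each SQL node and assuming the datakeeper.local domain (the networking script above already points each node's DNS at DC1, 10.1.1.102):

# Join this node to the domain and restart to apply the change
$DomainCred = Get-Credential -Message "Enter domain admin credentials for datakeeper.local"
Add-Computer -DomainName "datakeeper.local" -Credential $DomainCred -Restart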

3. Enable Windows Server Failover Clustering (WSFC)

With all nodes now part of the Active Directory domain, the next step is to install and enable WSFC on all three SQL nodes. This feature provides the foundation for SQL Server FCI, allowing for automatic failover between nodes.

Steps:

1. Install the Failover Clustering feature on all SQL nodes.


Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
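
Because the feature is needed on every node, one option, assuming PowerShell remoting is enabled between the domain-joined nodes, is to install it on all three from a single session:

# Install the Failover Clustering feature on all three SQL nodes in one pass
Invoke-Command -ComputerName SQLNode1, SQLNode2, SQLNode3 -ScriptBlock {
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
}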

2. Create the cluster (using -NoStorage, since replicated storage is added later).


New-Cluster -Name SQLCluster -Node SQLNode1,SQLNode2,SQLNode3 -NoStorage

SIOS SQL Server FCI 01

4. Create a cloud storage account for Cloud Witness

To ensure quorum resiliency, we will configure a Cloud Witness as the cluster quorum mechanism. This Azure storage account-based witness is a lightweight, highly available solution that ensures the cluster maintains quorum even in the event of an AZ or regional failure.

Steps:

1. Create an Azure storage account in a third, independent region.


New-AzStorageAccount -ResourceGroupName "MySQLFCIResourceGroup" `
                     -Name "cloudwitnessstorageacct" `
                     -Location "westus3" `
                     -SkuName "Standard_LRS" `
                     -Kind StorageV2

2. Get the key that will be used to create the Cloud Witness.


Get-AzStorageAccountKey -ResourceGroupName "MySQLFCIResourceGroup" -Name "cloudwitnessstorageacct"                     
KeyName Value                                                                                    Permissions CreationTime
------- -----                                                                                    ----------- ------------
key1    dBIdjU/lu+86j8zcM1tdg/j75lZrB9sVKHUKhBEneHyMOxYTeZhtVeuzt7MtBOO9x/8QtYlrbNYY+AStddZZOg==        Full 3/28/2025 2:38:00 PM
key2    54W5NdJ6xbFwjTrF0ryIOL6M7xGOylc1jxnD8JQ94ZOy5dQOo3BAJB2TYzb22KaDeYrv09m6xVsW+AStBxRq6w==        Full 3/28/2025 2:38:00 PM

3. Configure the WSFC cluster quorum settings to use Cloud Witness as the tie-breaker. This PowerShell script can be run on any of the cluster nodes.


$parameters = @{
    CloudWitness = $true
    AccountName  = 'cloudwitnessstorageacct'
    AccessKey    = 'dBIdjU/lu+86j8zcM1tdg/j75lZrB9sVKHUKhBEneHyMOxYTeZhtVeuzt7MtBOO9x/8QtYlrbNYY+AStddZZOg=='
    Endpoint     = 'core.windows.net'
}

Set-ClusterQuorum @parameters 

5. Validate the configuration

With WSFC enabled and Cloud Witness configured, run cluster validation to confirm that all nodes meet the requirements for failover clustering. Running Test-Cluster on any cluster node validates the existing cluster configuration.


Test-Cluster

Once the base cluster is operational, we move on to configuring storage replication with SIOS DataKeeper.

6. Install SIOS DataKeeper on all three cluster nodes

Because Azure does not support shared storage across AZs and regions, we use SIOS DataKeeper Cluster Edition to replicate block-level storage and create a stretched cluster.

Steps:

  1. Install SIOS DataKeeper Cluster Edition on SQLNode1, SQLNode2, and SQLNode3.
  2. Restart the nodes after installation.
  3. Ensure the SIOS DataKeeper service is running on all nodes.

7. Format the 20GB Disk as the F: drive

Each SQL node has an additional 20GB Premium SSD, which will be used for SQL Server data storage replication.

Steps:

  1. Initialize the extra 20GB disk on SQLNode1.
  2. Format it as the F: drive.
  3. Assign the same drive letter (F:) on SQLNode2 and SQLNode3 to maintain consistency.
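
A minimal sketch of steps 1 and 2, run on SQLNode1 and assuming the 20GB data disk is the only uninitialized (RAW) disk on the VM:

# Initialize the raw 20GB data disk, create a partition lettered F:, and format it as NTFS
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter F -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "SQLData" -Confirm:$false

Run the same commands on SQLNode2 and SQLNode3 so each node presents its disk as F:.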

8. Create the DataKeeper job to replicate the F: drive

Now that the F: drive is configured, we create a DataKeeper replication job to synchronize data between the nodes:

  1. Synchronous replication between SQLNode1 and SQLNode2 (for low-latency, intra-region failover).
  2. Asynchronous replication between SQLNode1 and SQLNode3 (for cross-region disaster recovery).

Steps:

  1. Launch DataKeeper and create a new replication job.
  2. Configure synchronous replication for the F: drive between SQLNode1 and SQLNode2.
  3. Configure asynchronous replication between SQLNode1 and SQLNode3.

The screenshots below walk through the process of creating the DataKeeper job that replicates the F: drive between the three servers.

SIOS SQL Server FCI 02

SIOS SQL Server FCI 03

SIOS SQL Server FCI 04

SIOS SQL Server FCI 05

SIOS SQL Server FCI 06

To add the second target, right-click on the existing Job and choose “Create a Mirror.”

SIOS SQL Server FCI 07

SIOS SQL Server FCI 08

SIOS SQL Server FCI 09

SIOS SQL Server FCI 10

SIOS SQL Server FCI 11

Once replication is active, SQLNode2 and SQLNode3 will have an identical copy of the data stored on SQLNode1’s F: drive.

If you look in Failover Cluster Manager, you will see “DataKeeper Volume F” in Available Storage. Failover clustering will treat this like it is a regular shared disk.
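
You can confirm the same thing from PowerShell on any cluster node; the resource name below is the one shown in Failover Cluster Manager:

# Verify the replicated volume is registered as a cluster resource
Get-ClusterResource -Name "DataKeeper Volume F" |
    Format-Table Name, State, OwnerGroup, ResourceType -AutoSize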

SIOS SQL Server FCI 12

9. Install SQL Server on SQLNode1 as a new clustered instance

With WSFC configured and storage replication active, we can now install SQL Server FCI.

Steps:

  1. On SQLNode1, launch the SQL Server installer.
  2. Choose “New SQL Server failover cluster installation.”
  3. Complete the installation and restart SQLNode1.

You will notice during the installation that the “DataKeeper Volume F” is presented as an available storage location.

SIOS SQL Server FCI 13

10. Install SQL Server on SQLNode2 and SQLNode3 (Add Node to Cluster)

To complete the SQL Server FCI, we must add the remaining nodes to the cluster.

Steps:

  1. Run SQL Server setup on SQLNode2 and SQLNode3.
  2. Choose “Add node to an existing SQL Server failover cluster.”
  3. Validate cluster settings and complete the installation.

Once SQL Server is installed on all three cluster nodes, Failover Cluster Manager will look like this.

SIOS SQL Server FCI 14

11. Update SQL Server to use a distributed network name (DNN)

By default, SQL Server FCI requires an Azure load balancer (ALB) to manage client connections. However, Azure now supports distributed network names (DNNs), eliminating the need for an ALB.

Steps:

  1. Update SQL Server FCI to use DNN instead of a traditional floating IP.
  2. Ensure name resolution works across all nodes.
  3. Validate client connectivity to SQL Server using DNN.

Detailed instructions on how to update SQL Server FCI to use DNN can be found in the Microsoft documentation.


Add-ClusterResource -Name sqlserverdnn -ResourceType "Distributed Network Name" -Group "SQL Server (MSSQLSERVER)" 

Get-ClusterResource -Name sqlserverdnn | Set-ClusterParameter -Name DnsName -Value FCIDNN 

Start-ClusterResource -Name sqlserverdnn 

You can now connect to the clustered SQL instance using the DNN “FCIDNN.”

12. Install SQL Server Management Studio (SSMS) on all three nodes

For easier SQL Server administration, install SQL Server Management Studio (SSMS) on all three nodes.

Steps:

  1. Download the latest version of SSMS from Microsoft.
  2. Install SSMS on SQLNode1, SQLNode2, and SQLNode3.
  3. Connect to the SQL Server cluster using DNN.
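
If you prefer to script the install, a minimal sketch follows. It assumes the aka.ms/ssmsfullsetup download link and the SSMS-Setup-ENU.exe silent-install switches documented for SSMS; newer SSMS releases may ship a different installer, so verify the switches for your version:

# Download the SSMS installer and run an unattended installation
$Installer = "$env:TEMP\SSMS-Setup-ENU.exe"
Invoke-WebRequest -Uri "https://aka.ms/ssmsfullsetup" -OutFile $Installer
Start-Process -FilePath $Installer -ArgumentList "/Install /Quiet /Norestart" -Wait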

13. Test failover and switchover scenarios

Finally, we validate HA and DR functionality by testing failover and switchover scenarios:

  1. Perform a planned failover (manual switchover) from SQLNode1 to SQLNode2.
  2. Simulate an AZ failure and observe automatic failover.
  3. Test cross-region failover from SQLNode1 (East US 2) to SQLNode3 (Central US).

This confirms that SQL Server FCI can seamlessly failover within AZs and across regions, ensuring minimal downtime and data integrity.
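
A minimal sketch of the planned switchover in step 1, using the instance group name shown earlier (“SQL Server (MSSQLSERVER)”):

# Move the SQL Server FCI role to SQLNode2, then confirm which node owns it
Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node SQLNode2
Get-ClusterGroup -Name "SQL Server (MSSQLSERVER)" | Format-Table Name, OwnerNode, State -AutoSize

The same command with -Node SQLNode3 exercises the cross-region path to Central US.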

Four nines uptime

By following these steps, we have successfully deployed, configured, and tested a multi-AZ, multi-region SQL Server FCI in Azure. This architecture provides 99.99% uptime, seamless failover, and disaster recovery capabilities, making it ideal for business-critical applications. 

Dave Bermingham is senior technical evangelist at SIOS Technology.

New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.


JDK 25: The new features in Java 25 7 Jun 2025, 2:03 am

Java Development Kit (JDK) 25, a planned long-term support release of standard Java due in September, has reached the initial rampdown or bug-fixing phase with 18 features. The final feature, added June 5, is an enhancement to the JDK Flight Recorder (JFR) to capture CPU-time profiling information on Linux.

JDK 25 comes on the heels of JDK 24, a six-month-support release that arrived March 18. As a long-term support (LTS) release, JDK 25 will get at least five years of Premier support from Oracle. JDK 25 is due to arrive as a production release on September 16, after a second rampdown phase beginning July 17 and two release candidates planned for August 17 and August 21. The most recent LTS release was JDK 21, which arrived in September 2023.

Early access builds of JDK 25 can be downloaded from jdk.java.net. The features previously slated for JDK 25 include a preview of PEM (Privacy-Enhanced Mail) encodings of cryptographic objects, the generational Shenandoah garbage collector, ahead-of-time command-line ergonomics, ahead-of-time method profiling, JDK Flight Recorder (JFR) cooperative sampling, JFR method timing and tracing, compact object headers, and a third preview of primitive types in patterns, instanceof, and switch, as well as scoped values, a vector API, a key derivation function API, structured concurrency, flexible constructor bodies, module import declarations, compact source files and instance main methods, stable values, and removal of the 32-bit x86 port.

The JFR CPU-time profiling feature enhances the JDK Flight Recorder to capture CPU-time profiling information on Linux. The JFR is the JDK’s profiling and monitoring facility. Enhancing the JFR to use the Linux kernel’s CPU timer to safely produce CPU-time profiles of Java programs would help developers optimize the efficiency of the Java applications they deploy on Linux. CPU-time profiling in the JFR may be added for other platforms in the future. The CPU-time profiling feature is the third feature involving the JFR in JDK 25, the others being the cooperative sampling and method timing and tracing capabilities. This is an experimental feature.

With PEM encodings of cryptographic objects, JDK 25 previews a concise API for encoding objects that represent cryptographic keys, certificates, and certificate revocation lists into the widely used PEM transport format, and for decoding from that format back into objects. The Java platform has not had an easy-to-use API for decoding and encoding in the PEM format. A main goal of the feature is ease of use. Another goal is support for conversions between PEM text and cryptographic objects that have standard representations in the binary formats PKCS#8 (for private keys), X.509 (for public keys, certificates, and certificate revocation lists), and PKCS#8 v2.0 (for encrypted private keys and asymmetric keys).

Generational Shenandoah changes the generational mode of the Shenandoah garbage collector (GC) from an experimental feature to a product feature. Previewed in JDK 24, the GC has had many stability and performance enhancements, according to the proposal. The GC in JDK 24 was intended to offer collection capabilities to improve sustainable throughput, load-spike resilience, and memory utilization. Several users have reported running demanding workloads with this GC. Generational Shenandoah once was planned for JDK 21 in 2023 but was dropped because the capability was deemed not ready at the time.

Ahead-of-time command-line ergonomics is intended to make it easier to create ahead-of-time (AOT) caches, which accelerate the startup of Java applications by simplifying commands needed for common use cases. Goals include simplifying the process of creating an AOT cache with no loss of expressiveness, and not introducing fundamentally new AOT workflows but making it easier to access existing ones. This proposal follows the ahead-of-time caches introduced by ahead-of-time class loading and linking in JDK 24.

Ahead-of-time method profiling would improve warmup time by making method execution profiles from a previous run of an application available right away when the HotSpot JVM starts. This will enable the just-in-time (JIT) compiler to generate native code instantly upon application startup rather than having to wait for the collection of profiles. Here, goals include helping applications warm up quicker; not requiring any changes to the code of applications, libraries, or frameworks; and not introducing any new constraints on application execution. The proposal also would not introduce new AOT workflows, but would use existing AOT cache creation commands. The AOT cache introduced in JDK 24 would be extended to collect method profiles during training runs.

JFR cooperative sampling would improve the stability of the JDK Flight Recorder when it asynchronously samples Java thread stacks. This would be achieved by walking call stacks only at safepoints while minimizing safepoint bias.

JFR method timing and tracing would extend the JDK Flight Recorder with facilities for method timing and tracing rather than via bytecode instrumentation. Goals of this feature include allowing execution times and stack traces to be recorded for specific methods without needing source code modifications, and recording exact statistics for method invocations. Another goal is allowing methods to be selected via command-line argument, configuration files, the jcmd tool, and over the network via the Java Management Extensions API. Timing and tracing method invocations can help identify performance bottlenecks, optimize code, and find the root causes of bugs.

Compact object headers, an experimental feature in JDK 24, would become a product feature in JDK 25. In JDK 24, this capability was introduced to reduce the size of object headers in the HotSpot JVM from between 96 bits and 128 bits down to 64 bits on 64-bit architectures. This reduces the heap size, improves deployment density, and increases data locality. Since JDK 24, compact object headers have proven their stability and performance, the proposal says.

A third preview of primitive types in patterns, instanceof, and switch would enhance pattern matching by allowing primitive types in all pattern contexts and extend instanceof and switch to work with all primitive types. Originally proposed in JDK 23 and followed up in JDK 24, this would still be a preview language feature in JDK 25. Among goals are enabling data exploration by allowing type patterns for all types, whether primitive or reference, and providing easy-to-use constructs that eliminate the risk of losing information due to unsafe casts.

Scoped values, to be previewed for a fifth time, allows a method to share immutable data with its callees within a thread and with child threads. Scoped values are easier to reason about than thread-local variables, according to the OpenJDK JDK Enhancement Proposal (JEP). They also have lower space and time costs, especially when used together with virtual threads and structured concurrency. Goals of the plan include ease of use, comprehensibility, robustness, and performance. The scoped values API was proposed for incubation in JDK 20, proposed for preview in JDK 21, and subsequently refined for JDK 22 through JDK 24. The feature will be finalized in JDK 25, with one change: the ScopedValue.orElse method no longer accepts null as its argument.

The vector API is designed to express vector computations that reliably compile at runtime to optimal vector instructions on supported CPUs, thus achieving performance superior to equivalent scalar computations. The API will be incubated for the 10th time in JDK 25, after having been incubated in every release dating back to JDK 16. Two notable implementation changes are featured in the JDK 25 implementation of the API. First, the implementation now links to native mathematical-function libraries via the Foreign Function and Memory API rather than custom C++ code inside the HotSpot JVM, thus improving maintainability. Second, addition, subtraction, division, multiplication, square root, and fused multiply/add operations on Float16 values now are auto-vectorized on supporting x64 CPUs. Additionally, VectorShuffle now supports access to and from MemorySegment.

The key derivation function API provides for functions that are cryptographic algorithms for deriving additional keys from a secret key and other data. One of the goals of the API is enabling applications to use key derivation function algorithms such as the HMAC-based Extract-and-Expand Key Derivation Function and Argon2. Other goals include allowing security providers to implement key derivation function algorithms in either Java code or native code, and enabling the use of key derivation functions in key encapsulation mechanism implementations such as ML-KEM, in higher level protocols such as Hybrid Key Exchange in TLS 1.3, and in cryptographic schemes such as Hybrid Public Key Encryption. The API will be finalized in JDK 25 after being previewed in JDK 24.

Structured concurrency was previewed previously in JDK 21 through JDK 24, after being incubated in JDK 19 and JDK 20. Now in its fifth preview, structured concurrency treats groups of related tasks running in different threads as single units of work. This streamlines error handling and cancellation, improves reliability, and enhances observability, the proposal states. The primary goal is to promote a style of concurrent programming that can eliminate common risks arising from cancellation and shutdown, such as thread leaks and cancellation delays. A second goal is to improve the observability of concurrent code. JDK 25 introduces several API changes. In particular, a StructuredTaskScope is now opened via static factory methods rather than public constructors. Also, the zero-parameter open factory method covers the common case by creating a StructuredTaskScope that waits for all subtasks to succeed or any subtask to fail.

Flexible constructor bodies was previewed in JDK 22 as “statements before super(…)” as well as in JDK 23 and JDK 24. The feature is intended to be finalized in JDK 25. In flexible constructor bodies, the body of a constructor allows statements to appear before an explicit constructor invocation such as super(…) or this(…). These statements cannot reference the object under construction, but they can initialize its fields and perform other safe computations. This change lets many constructors be expressed more naturally and allows fields to be initialized before becoming visible to other code in the class, such as methods called from a superclass constructor, thereby improving safety. Goals of the feature include removing unnecessary restrictions on code in constructors; providing additional guarantees that the state of a new object is fully initialized before any code can use it; and reimagining the process of how constructors interact with each other to create a fully initialized object.

Module import declarations, which was previewed in JDK 23 and JDK 24, enhances the Java language with the ability to succinctly import all of the packages exported by a module. This simplifies the reuse of modular libraries but does not require the importing code to be in a module itself. Goals include simplifying the reuse of modular libraries by letting entire modules be imported at once; avoiding the noise of multiple type import-on-demand declarations when using diverse parts of the API exported by a module; allowing beginners to more easily use third-party libraries and fundamental Java classes without having to learn where they are located in a package hierarchy; and ensuring that module import declarations work smoothly alongside existing import declarations. Developers who use the module import feature should not be required to modularize their own code.

Compact source files and instance main methods evolves the Java language so beginners can write their first programs without needing to understand language features designed for large programs. Beginners can write streamlined declarations for single-class programs and seamlessly expand programs to use more advanced features as their skills grow. Likewise, experienced developers can write small programs succinctly without the need for constructs intended for programming in the large, the proposal states. This feature, due to be finalized in JDK 25, was previewed in JDK 21, JDK 22, JDK 23, and JDK 24, albeit under slightly different names. In JDK 24 it was called “simple source files and instance main methods.”

Stable values are objects that hold immutable data. Because stable values are treated as constants by the JVM, they enable the same performance optimizations that are enabled by declaring a field final. But compared to final fields, stable values offer greater flexibility regarding the timing of their initialization. A chief goal of this feature, which is in a preview stage, is improving the startup of Java applications by breaking up the monolithic initialization of application state. Other goals include enabling user code to safely enjoy constant-folding optimizations previously available only to JDK code; guaranteeing that stable values are initialized at most once, even in multi-threaded programs; and decoupling the creation of stable values from their initialization, without significant performance penalties.

Removal of the 32-bit x86 port involves removing both the source code and build support for this port, which was deprecated for removal in JDK 24. The cost of maintaining this port outweighs the benefits, the proposal states. Keeping parity with new features, such as the foreign function and memory API, is a major opportunity cost. Removing the 32-bit x86 port will allow OpenJDK developers to accelerate the development of new features and enhancements.

Separate from the official feature list, JDK 25 also brings performance improvements to the class String, by allowing the String::hashCode function to take advantage of a compiler optimization called constant folding. Developers who use strings as keys in a static unmodifiable Map should see significant performance boosts, according to a May 1 article on Oracle’s Inside Java website.


Spring Java creator unveils AI agent framework for the JVM 7 Jun 2025, 1:26 am

Embabel, an open source framework for authoring AI agentic flows on the JVM, has been launched by Spring Framework founder Rod Johnson. Johnson aims for Embabel to become the natural way to integrate generative AI into Java applications, especially those built on the Spring Framework.

In a web post on May 22, Johnson described Embabel as a new programming model for authoring agentic flows on the JVM that seamlessly mix LLM-prompted interactions with code and domain models. Embabel is intended not just to play catch-up with Python agent frameworks, but surpass them. “Much of the critical business logic in the world is running on the JVM, and for good reason. Gen AI enabling it is of critical importance.” 

Embabel is written in Kotlin and offers a natural usage model from Java, Johnson said. But once Embabel is established, plans call for creating TypeScript and Python projects based on the Embabel model.

Along with close Spring integration, Johnson cited these distinguishing features of Embabel:

  • Embabel introduces a planning step. The framework discovers actions and goals from application code, and plans toward the most appropriate goal given user or other system input. Planning is accomplished via a non-LLM AI algorithm that provides a deterministic and explainable approach to planning.
  • Embabel encourages building a rich domain model in an application, typically Kotlin data classes or Java records. This ensures that prompts are type-safe, tool-able, and survive refactoring. It also allows behavior to be added to domain objects, which can be exposed as LLM tools as well as used in code.

Johnson said that while Embabel embraces the Model Context Protocol, a higher-level orchestration technology is needed. The reasons include the need for explainability, discoverability, the ability to mix models, the ability to inject guardrails at any point in a flow, the ability to manage flow execution, composability of flows at scale, and safer integration with existing systems such as databases, where it is dangerous to allow write access to LLMs, Johnson noted.

“It’s early, but we have big plans,” said Johnson. “We want not just to build the best agent platform on the JVM, but to build the best agent platform, period.”


Decentralized mesh cloud: A promising concept 6 Jun 2025, 11:00 am

Cloud computing and cloud infrastructure systems are evolving at an unprecedented rate in response to the growing demands of AI tasks that test the limits of resilience and scalability. The emergence of decentralized mesh hyperscalers is an innovation that dynamically distributes workloads across a network of nodes, bringing computing closer to the data source and enhancing efficiency by reducing latency.

A decentralized mesh hyperscaler is a distributed computing architecture in which multiple devices, known as nodes, connect directly, dynamically, and non-hierarchically to each other. Each node sends and receives data to collaboratively process and share resources without the need for a central server. This architectural choice creates a resilient, self-healing network that allows information or workloads to flow along multiple paths, providing high availability, scalability, and fault tolerance. Mesh computing is commonly used in Internet of Things networks, wireless communication, and edge computing scenarios, enabling efficient data exchange and task distribution across a wide range of interconnected devices.

Decentralized mesh computing sounds promising; however, it’s essential to evaluate this model from an implementation standpoint, especially when weighing the trade-offs between complexity and performance. In some scenarios, opting for cloud services from a specific region rather than a network of distributed areas or points of presence (depending on business requirements) may still be the most effective choice. Let’s explore why.

The allure of large-scale mesh computing

Companies involved in AI development find decentralized mesh hyperscalers intriguing. Traditional cloud infrastructure can struggle with modern machine learning workloads. Centralized data centers may face issues with latency and overload, making it challenging to meet the redundancy requirements of time-sensitive AI operations.

Many large tech companies are working to improve efficiency by distributing data processing across various points in the network rather than centralizing it in one location. This helps avoid traffic jams and wasted resources, potentially leading to an eco-friendly cloud system. One example could be a self-driving car company that must handle a large amount of live vehicle data. By utilizing mesh computing technology to process data at its source of generation, latency is reduced and system response times are enhanced, with the potential to improve the overall user experience.

Large tech companies also aim to address inefficiencies in resources by dispersing workloads across infrastructure.  Organizations can then access computing power on demand without the delays and expenses associated with cloud setups that rely on reserved capacity.

The challenge of complexity

Decentralized mesh hyperscalers seem promising; however, their practical implementation can bring added complexity to businesses. Managing workloads across regions and even minor points of presence necessitates the adoption of consumption models that are anything but straightforward.

When businesses use mesh hyperscalers to deploy applications across nodes in a distributed setup, smooth coordination among these nodes is crucial to ensuring optimal performance and functionality. Processing data close to its source presents challenges related to synchronization and consistency. Applications running on nodes must collaborate seamlessly; otherwise, this can lead to inefficiencies and delays, which will undermine the touted benefits of mesh computing.

Additionally, managing workloads in a distributed model can decrease performance. For example, during processing periods or within transactional setups, workloads may need to travel greater distances across dispersed nodes. In these cases, latency increases, especially when neighboring nodes are overloaded or not functioning optimally.

Issues like data duplication, increased storage requirements, and compliance concerns require careful handling. Companies must assess whether the flexibility and robustness of a mesh network genuinely outweigh the challenges and potential performance declines that stem from the additional challenge of managing nodes across various locations.

Balancing act in computing efficiency

Companies may find it better to rely on a cloud setup in a single location rather than opting for a scattered, decentralized structure. The traditional single-region approach provides simplicity, consistency, and established performance standards. All your resources and workloads operate within one controlled data center. You don’t have to worry about coordinating between nodes or managing multiregional latency.

When it comes to tasks that don’t require processing, such as batch data handling or stable AI workflows, setting up in a single region can provide faster and more reliable performance. Keeping all tasks within one area reduces the time needed for data transfer and decreases the likelihood of errors occurring. Centralized structures also help organizations maintain control over data residency and compliance regulations. Companies don’t have to figure out the varying rules and frameworks of different regions. When it comes to applications, especially those that don’t rely on immediate, large-scale processing, deployment in a single region typically remains a cost-effective and efficient performance choice.

Finding equilibrium

Decentralized mesh architectures offer a promising opportunity in the fields of AI and advanced technologies. Organizations must carefully assess the benefits and drawbacks. It’s essential to consider not just the novelty of the technology but also how well it aligns with an organization’s specific operational needs and strategic goals.

In certain situations, distributing tasks and processing data locally will undoubtedly enhance performance for demanding applications. However, some businesses may find that sticking to cloud setups within a single region offers operational ease, predictability, improved performance, and compliance adherence.

As cloud computing evolves, success lies in striking a balance between innovation and simplicity. Decentralized mesh hyperscalers represent progress—this is a fact beyond dispute. They also require a level of sophistication and understanding that not every organization has.

In situations where cost is a concern, mesh hyperscalers can reduce expenses by utilizing underused resources; however, they also introduce added operational complexities and require a learning curve to navigate effectively. For businesses that do not necessarily need the flexibility or robustness provided by a distributed system, choosing an alternative approach may still be the more cost-effective option.


JavaScript innovation and the culture of programming 6 Jun 2025, 11:00 am

Something surprising about software is its vibrant and enduring culture. Far from being a sidebar to engineering, how programmers collaborate and learn from each other is often the heart of the story. JavaScript is, of course, one of the best examples of that phenomenon, with a culture that is at once inclusive, challenging, and rewarding.

This month, we have an example of software development culture in action with the release of Angular 20 at the Google I/O conference. But the culture of programming is everywhere, infused in the way developers work and build, and especially in the open source projects we use and contribute to.

That cultural resilience is a good thing, too, because I can’t be the only one finding that the more I use AI, the more I appreciate the human element of programming—the spirit, if you will. So, here’s to continuing to evolve the culture of software development while using JavaScript in ways that empower our vital human connections.

Top picks for JavaScript readers on InfoWorld

Putting agentic AI to work in Firebase Studio
Agentic AI is the next frontier for coding, and Google’s Firebase Studio is currently one of the most advanced platforms available. Get a first look at where Firebase Studio shines, and where it’s still a little rusty.

JavaScript promises: 4 gotchas and how to avoid them
You’ve learned the basics of JavaScript promises and how to use async/await to simplify asynchronous code. Now learn four ways promises can trick you, and how to resolve them.

8 ways to do more with modern JavaScript
Mastering a language is an ongoing practice driven by equal parts necessity and curiosity. From lambdas to promises, and from paradigms to AI, these features and techniques will get your train of thought rolling.

How to gracefully migrate your JavaScript programs to TypeScript
JavaScript and TypeScript are like binary stars revolving around each other; as a developer, you benefit from understanding both. Here’s a no-nonsense, practical guide to transforming your existing JavaScript programs to TypeScript.

More good reads and JavaScript updates elsewhere

Wake up, Remix!
Remix is being reimagined as a full-stack framework and toolset built on a fork of Preact. Here, the Remix team explains the thinking behind turning Remix into an all-in-one, integrated approach to web development.

Your React meta-framework feels broken, here’s why
Redwood.js is an alternative full-stack meta-framework that aims to simplify JavaScript development by avoiding over-abstraction. Here’s the scoop on why Redwood aims to be different, directly from the framework’s designers.

Video: The 3 ways JavaScript frameworks render the DOM
Solid.js’s Ryan Carniato reviews the fundamental rendering approaches used by JavaScript web frameworks. Turns out, there are only three basic approaches.


Adobe adds Product Support Agent for AI-assisted troubleshooting 5 Jun 2025, 11:41 pm

Expanding its planned suite of AI agents, Adobe introduced the new Product Support Agent, intended to simplify troubleshooting and support case management in the Adobe Experience Platform for managing customer experiences.

Announced June 4 and powered by the now-available Adobe Experience Platform Agent Orchestrator, the Product Support Agent is intended to lighten the load of operational troubleshooting by providing in-the-moment guidance and case management within the AI Assistant conversational interface. When a user asks for help with creating a support ticket, the new Product Support Agent gathers relevant contextual data from logs, metadata, user session data, and other sources to pre-fill the support case. The user can view and approve the case before submitting it.

As part of its expansion of AI agents, Adobe has also announced the general worldwide availability of its Data Insights Agent. Built on Adobe Experience Platform Agent Orchestrator, the Data Insights Agent allows users to query data directly using natural-language questions such as “What channels drove the most conversations last week?” The agent then builds and delivers a visualization in the Analysis Workspace with Adobe Customer Journey Analysis. Adobe has also announced upcoming agents to support account qualification, data engineering, site optimization, and workflow optimization.


Snowflake: Latest news and insights 5 Jun 2025, 7:24 pm

Snowflake (NYSE:SNOW) has rapidly become a staple for data professionals and has arguably changed how cloud developers, data managers and data scientists interact with data. Its architecture is designed to decouple storage and compute, allowing organizations to scale resources independently to optimize costs and performance.

For cloud developers, Snowflake’s platform is built to be scalable and secure, allowing them to build data-intensive applications without needing to manage underlying infrastructure. Data managers benefit from its data-sharing capabilities, which are designed to break down traditional data silos and enable secure, real-time collaboration across departments and with partners.

Data scientists have gravitated to Snowflake’s capability to handle large, diverse datasets and its integration with machine learning tools. The platform is designed to let them rapidly prepare raw data and then build, train, and deploy models directly within Snowflake to produce actionable insights.

Watch this page for the latest on Snowflake.

Snowflake latest news and analysis

Snowflake customers must choose between performance and flexibility

June 4, 2025: Snowflake is boosting the performance of its data warehouses and introducing a new adaptive technology to help enterprises optimize compute costs. Adaptive Warehouses, built atop Snowflake’s Adaptive Compute, is designed to lower the burden of compute resource management by maximizing efficiency through resource sizing and sharing.

Snowflake takes aim at legacy data workloads with SnowConvert AI migration tools

June 3, 2025: Snowflake is hoping to win business with a new tool for migrating old workloads. SnowConvert AI is designed to help enterprises move their data, data warehouses, business intelligence (BI) reports, and code to its platform without increasing complexity.

Snowflake launches Openflow to tackle AI-era data ingestion challenges

June 3, 2025: Snowflake introduced a multi-modal data ingestion service — Openflow — designed to help enterprises solve challenges around data integration and engineering in the wake of demand for generative AI and agentic AI use cases.

Snowflake acquires Crunchy Data to counter Databricks’ Neon buy

June 3, 2025: Snowflake plans to buy Crunchy Data, a cloud-based PostgreSQL database provider, for an undisclosed sum. The move is an effort to offer developers an easier way to build AI-based applications by offering a PostgreSQL database in its AI Data Cloud. The deal, according to the Everest Group, is an answer to rival Databricks’ acquisition of open source serverless Postgres company Neon.

Snowflake’s Cortex AISQL aims to simplify unstructured data analysis

June 3, 2025: Snowflake is adding generative AI-powered SQL functions to help organizations analyze unstructured data with SQL. The new AISQL functions will be part of Snowflake’s Cortex, a managed service inside its Data Cloud that provides the building blocks for using LLMs without the need to manage complex GPU-based infrastructure.

Snowflake announces preview of Cortex Agent APIs to power enterprise data intelligence

February 12, 2025: Snowflake announced the public preview of Cortex Agents, a set of APIs built on top of the Snowflake Intelligence platform, a low-code offering that was first launched in November at Build, the company’s annual developer conference.


Workday’s new dev tools help enterprises connect with external agents 5 Jun 2025, 6:29 pm

Workday (Nasdaq:WDAY) has introduced new developer tools to help enterprises connect its HR and finance software with external agents.

The tools are an extension of the company’s Illuminate agentic AI platform, and include the Agent Gateway, AI Widgets, and expanded AI Gateway APIs.

Illuminate, rolled out in September 2024, is intended to accelerate common tasks such as writing knowledge-based text, job descriptions, or contracts, providing real-time AI assistance within workflows, and making a “team” of AI experts available to users. The models behind it are powered by the 800 billion business, HR, and financial transactions made by Workday customers annually.

AI Gateway targets governance of multi-agent applications

While some of the new tools are already available, even early adopters will have to wait until nearer the end of 2025 to get their hands on the new Agent Gateway.  

The gateway will enable enterprises to connect their Workday agents with external agents and create advanced multi-agent applications across software platforms, Workday said.

It works with Workday’s Agent System of Record (ASOR) — a layer inside its proprietary central platform intended to help enterprises manage Workday agents and third-party agents in one place, while providing tools for governing, managing, and optimizing them. The ASOR, in turn, makes use of shared protocols, such as Anthropic’s Model Context Protocol (MCP) and Google’s Agent-to-Agent Protocol (A2A), to collaborate with external agents.

For now, Agent Gateway and ASOR only support external agents that are built on vendor platforms in Workday’s Agent Partner Network. Currently, these partners include the likes of Accenture, Adobe, Amazon Web Services (AWS), Auditoria.AI, Compa, Deloitte, Glean, Google Cloud, IBM, Kainos, KPMG, Microsoft, Paradox, PwC, and WorkBoardAI.

The Agent Gateway, according to Abhigyan Malik, practice director at Everest Group, is Workday’s way of providing a mechanism that allows for governed, predictable orchestration of AI agents, not only within its core applications but also in conjunction with external systems.            

Orchestration and governance are critical for enterprises working with advanced agentic applications across a heterogeneous technology stack because they allow enterprises to control agents’ access to data, which is particularly relevant in HR and financial contexts, Malik said.

Workday Marketplace already includes a selection of agents developed by Workday itself, or on the platforms of vendors in its Agent Partner Network, that can be used with Agent Gateway.

More AI-based guidance for in-house enterprise applications

Workday also announced AI Widgets that enterprise developers can use to build more AI-based guidance into their in-house applications. They will be able to personalize the AI in these widgets with custom prompts for a particular team or even a single user, it said.

The company is expanding its AI Gateway APIs to help developers integrate Workday AI services natively inside applications. These services could range from natural language conversation ability to leveraging document intelligence in applications, it said.

And it is adding more capabilities to Developer Copilot, the conversational AI coding companion that is part of its Extend platform for building custom applications on Workday data. The additional capabilities of Developer Copilot include generating application code snippets, generating data queries, helping find the right APIs for a particular use case, and generating functional orchestrations along with documentation.

The new Workday AI Services, AI Widgets, and Developer Copilot capabilities are currently available with Workday Extend Professional.

Workday is also working on a new command-line interface tool that it intends to make generally available by year-end. The Workday Developer CLI will help developers automate development tasks, collaborate more effectively, and integrate Workday into DevOps workflows, the company said.


How to test your Java applications with JUnit 5 5 Jun 2025, 11:00 am

JUnit 5 is the de facto standard for developing unit tests in Java. A robust test suite not only gives you confidence in the behavior of your applications as you develop them, but also gives you confidence that you won’t inadvertently break things when you make changes in the future.

This article gets you started with testing your Java applications using JUnit 5. You’ll learn:

  1. How to configure a Maven project to use JUnit 5.
  2. How to write tests using the @Test and @ParameterizedTest annotations.
  3. How to validate test results using JUnit 5’s built-in assertion functionality.
  4. How to work with the lifecycle annotations in JUnit 5.
  5. How to use JUnit tags to control which test cases are executed.

Before we do all that, let’s take a minute to talk about test-driven development.

What is test-driven development?

If you are developing Java code, you’re probably intimately familiar with test-driven development, so I’ll keep this section brief. It’s important to understand why we write unit tests, however, as well as the strategies developers employ when designing them.

Test-driven development (TDD) is a software development process that interweaves coding, testing, and design. It’s a test-first approach that aims to improve the quality of your applications. Test-driven development is defined by the following lifecycle:

  1. Add a test.
  2. Run all your tests and observe the new test failing.
  3. Implement the code.
  4. Run all your tests and observe the new test succeeding.
  5. Refactor the code.

Here’s a visual overview of the TDD lifecycle.

A diagram of the described test-driven development lifecycle.

Steven Haines

There’s a twofold purpose to writing tests before writing your code. First, it forces you to think about the business problem you are trying to solve. For example, how should successful scenarios behave? What conditions should fail? How should they fail? Second, testing first gives you more confidence in your tests. Whenever I write tests after writing code, I have to break them to ensure they’re actually catching errors. Writing tests first avoids this extra step.

Writing tests for the happy path is usually easy: Given good input, the class should return a deterministic response. But writing negative (or failure) test cases, especially for complex components, can be more complicated.

As an example, consider a test for calling a database repository class. On the happy path, we insert a record into the database and receive back the created object, including any generated keys. In reality, we must also consider the possibility of a conflict, such as inserting a record with a unique column value that is already held by another record. Additionally, what happens when the repository can’t connect to the database, perhaps because the username or password has changed? Or if there’s a network error in transit? What happens if the request doesn’t complete in your defined timeout limit?

To build a robust component, you need to consider all likely and unlikely scenarios, develop tests for them, and write your code to satisfy those tests. Later in the article, we’ll look at strategies for creating failure scenarios, along with some of the features in JUnit 5 that can help you test them.

Unit testing with JUnit 5

Let’s start with an example of how to configure a project to use JUnit 5 for a unit test. Listing 1 shows a MathTools class whose method converts a numerator and denominator to a double.

Listing 1. An example JUnit 5 project (MathTools.java)


package com.javaworld.geekcap.math;

public class MathTools {
    public static double convertToDecimal(int numerator, int denominator) {
        if (denominator == 0) {
            throw new IllegalArgumentException("Denominator must not be 0");
        }
        return (double)numerator / (double)denominator;
    }
}

We have two primary scenarios for testing the MathTools class and its method:

  • A valid test, in which we pass a non-zero integer for the denominator.
  • A failure scenario, in which we pass a zero value for the denominator.

Listing 2 shows a JUnit 5 test class to test these two scenarios.

Listing 2. A JUnit 5 test class (MathToolsTest.java)


package com.javaworld.geekcap.math;

import java.lang.IllegalArgumentException;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

class MathToolsTest {
    @Test
    void testConvertToDecimalSuccess() {
        double result = MathTools.convertToDecimal(3, 4);
        Assertions.assertEquals(0.75, result);
    }

    @Test
    void testConvertToDecimalInvalidDenominator() {
        Assertions.assertThrows(IllegalArgumentException.class, () -> MathTools.convertToDecimal(3, 0));
    }
}

In Listing 2, the testConvertToDecimalInvalidDenominator method executes the MathTools::convertToDecimal method inside an assertThrows call. The first argument is the expected type of the exception to be thrown. The second argument is a function that will throw that exception. The assertThrows method executes the function and validates that the expected type of exception is thrown.

The Assertions class and its methods

The org.junit.jupiter.api.Test annotation denotes a test method. The testConvertToDecimalSuccess method first executes the MathTools::convertToDecimal method with a numerator of 3 and a denominator of 4, then asserts that the result is equal to 0.75. The org.junit.jupiter.api.Assertions class provides a set of static methods for comparing actual and expected results. Methods in the JUnit Assertions class cover most of the primitive data types:

  • assertArrayEquals: Compares the contents of an actual array to an expected array.
  • assertEquals: Compares an actual value to an expected value.
  • assertNotEquals: Compares two values to validate that they are not equal.
  • assertTrue: Validates that the provided value is true.
  • assertFalse: Validates that the provided value is false.
  • assertLinesMatch: Compares two lists of Strings.
  • assertNull: Validates that the provided value is null.
  • assertNotNull: Validates that the provided value is not null.
  • assertSame: Validates that two values reference the same object.
  • assertNotSame: Validates that two values do not reference the same object.
  • assertThrows: Validates that the execution of a method throws an expected exception. (You can see this in the testConvertToDecimalInvalidDenominator example above.)
  • assertTimeout: Validates that a supplied function completes within a specified timeout.
  • assertTimeoutPreemptively: Validates that a supplied function completes within a specified timeout, but once the timeout is reached, it kills the function’s execution.

If any of these assertion methods fails, the unit test is marked as failed. That failure notice will be written to the screen when you run the test, then saved in a report file.
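
To make a few of these concrete, here is a quick sketch of several assertions in one test. The values are arbitrary and chosen only to illustrate the calls; MathTools is the class from Listing 1.


@Test
void demonstrateCommonAssertions() {
    // Compare two arrays element by element
    Assertions.assertArrayEquals(new int[]{1, 2, 3}, new int[]{1, 2, 3});

    // Validate boolean conditions
    Assertions.assertTrue("JUnit".startsWith("J"));
    Assertions.assertFalse("JUnit".isEmpty());

    // Validate null checks and object identity
    String name = "JUnit";
    Assertions.assertNotNull(name);
    Assertions.assertSame(name, name);

    // Validate that a call completes within one second
    Assertions.assertTimeout(java.time.Duration.ofSeconds(1),
        () -> MathTools.convertToDecimal(1, 2));
}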

Using delta with assertEquals

When using float and double values in an assertEquals, you can also specify a delta that represents a threshold of difference between the two values being compared. For example, 22/7 is often used as an approximation of PI, or 3.14, but if we divide 22 by 7, we do not get 3.14. Instead, we get 3.14285.

Listing 3 shows how to use a delta value to validate that 22/7 returns a value between 3.141 and 3.143.

Listing 3. Testing assertEquals with a delta


@Test
void testConvertToDecimalWithDeltaSuccess () {
    double result = MathTools.convertToDecimal(22, 7);
    Assertions.assertEquals(3.142, result, 0.001);
}

In this example, we expect 3.142 +/- 0.001, which matches all values between 3.141 and 3.143. Both 3.140 and 3.144 would fail the test, but 3.142857 passes.

Analyzing your test results

In addition to validating a value or behavior, the assert methods can also accept a textual description of the error, which can help you diagnose failures. Consider the two variations in the following output:


Assertions.assertEquals(0.75, result, "The MathTools::convertToDecimal value did not return the correct value of 0.75 for 3/4");

Assertions.assertEquals(0.75, result, () -> "The MathTools::convertToDecimal value did not return the correct value of 0.75 for 3/4");

The output shows the expected value of 0.75 as well as the actual value. It also displays the specified message, which can help you understand the context of the error. The difference between the two variations is that the first one always creates the message, even if it is not displayed, whereas the second one only constructs the message if the assertion fails. In this case, the construction of the message is trivial, so it doesn’t really matter. Still, there is no need to construct an error message for a test that passes, so it’s usually a best practice to use the second style.

Finally, if you’re using a Java IDE like IntelliJ to run your tests, each test method will be displayed by its method name. This is fine if your method names are readable, but you can also add a @DisplayName annotation to your test methods to better identify the tests:


@Test
@DisplayName("Test successful decimal conversion")
void testConvertToDecimalSuccess() {
  double result = MathTools.convertToDecimal(3, 4);
  Assertions.assertEquals(0.75, result);
}

Running your unit tests with Maven

To run JUnit 5 tests from a Maven project, you need to include the maven-surefire-plugin in the Maven pom.xml file and add a new dependency. Listing 4 shows the pom.xml file for this project.

Listing 4. Maven pom.xml for an example JUnit 5 project




<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>JUnitExample</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>24</maven.compiler.source>
        <maven.compiler.target>24</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>3.5.3</version>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>5.12.2</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>



JUnit 5 packages its components in the org.junit.jupiter group and uses the junit-jupiter aggregator artifact to import dependencies. Adding junit-jupiter imports the following dependencies:

  • junit-jupiter-api: Defines the API for writing tests and extensions.
  • junit-jupiter-engine: The test engine implementation that runs the unit tests.
  • junit-jupiter-params: Supports parameterized tests.

Next, we add the Maven build plugin, maven-surefire-plugin, to run the tests.

Finally, we target our build to Java 24, using the maven.compiler.source and maven.compiler.target properties.

Run the test class

Now we’re ready to run our test class. You can run it from your IDE, or use the following Maven command:


mvn clean test

If you’re successful, you should see something like the following:


[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running com.javaworld.geekcap.math.MathToolsTest
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.04 s - in com.javaworld.geekcap.math.MathToolsTest
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  3.832 s
[INFO] Finished at: 2025-05-21T08:21:15-05:00
[INFO] ------------------------------------------------------------------------

Parameterized tests in JUnit 5

You’ve seen how to write and run a basic JUnit 5 unit test, so now let’s go a bit further. The test class in this section is also based on the MathTools class, but we’ll use parameterized tests to more thoroughly test our code.

To start, I’ve added another method, isEven, to the MathTools class:


public static boolean isEven(int number) {
  return number % 2 == 0;
}

We could test this code the same way we did in the previous section, by passing different numbers to the isEven method and validating the response:


@Test
void testIsEvenSuccessful() {
  Assertions.assertTrue(MathTools.isEven(2));
  Assertions.assertFalse(MathTools.isEven(1));
}

The methodology works, but if we want to test a large number of values, it will soon become cumbersome to enter the values manually. In this case, we can use a parameterized test to specify the values that we want to test:


@ParameterizedTest
@ValueSource(ints = {0, 2, 4, 6, 8, 10, 100, 1000})
void testIsEven(int number) {
  Assertions.assertTrue(MathTools.isEven(number));
}

For this test, we use the @ParameterizedTest annotation instead of the @Test annotation. We also have to provide a source for the parameters.

Using sources in parameterized testing

There are different types of sources, but the simplest is the @ValueSource, which lets us specify a list of Integers or Strings. The parameter is passed as an argument to the test method and then can be used in the test. In this case, we’re passing in eight even integers and validating that the MathTools::isEven method properly identifies them as even.

This is better, but we still have to enter all the values we want to test. What would happen if we wanted to test all the even numbers between 0 and 1,000? Rather than manually entering all 500 values, we could replace our @ValueSource with @MethodSource, which generates the list of numbers for us. Here’s an example:


@ParameterizedTest
@MethodSource("generateEvenNumbers")
void testIsEvenRange(int number) {
  Assertions.assertTrue(MathTools.isEven(number));
}

static IntStream generateEvenNumbers() {
  return IntStream.iterate(0, i -> i + 2).limit(500);
}

When using a @MethodSource, we define a static method that returns a stream or collection. Each value will be sent to our test method as a method argument. In this example, we create an IntStream, which is a stream of integers. The IntStream starts at 0, increments by twos, and limits the total number of items in the stream to 500. This means the isEven method will be called 500 times, with all even numbers between 0 and 998.

Parameterized tests include support for the following types of sources:

  • ValueSource: Specifies a hardcoded list of integers or Strings.
  • MethodSource: Invokes a static method that generates a stream or collection of items.
  • EnumSource: Specifies an enum, whose values will be passed to the test method. It allows you to iterate over all enum values or include or exclude specific enum values.
  • CsvSource: Specifies a comma-separated list of values.
  • CsvFileSource: Specifies a path to a comma-separated value file with test data.
  • ArgumentsSource: Allows you to specify a class that implements the ArgumentsProvider interface, which generates a stream of arguments to be passed to your test method.
  • NullSource: Passes null to your test method if you are working with Strings, collections, or arrays. You can include this annotation with other annotations, such as ValueSource, to test a collection of values and null.
  • EmptySource: Includes an empty value if you are working with Strings, collections, or arrays.
  • NullAndEmptySource: Includes both null and an empty value if you are working with Strings, collections, or arrays.
  • FieldSource: Allows you to refer to one or more fields of the test class or external classes.
  • Multiple sources: JUnit allows you to use multiple “repeatable” sources by specifying multiple source annotations on your parameterized test method. Repeatable sources include: ValueSource, EnumSource, MethodSource, FieldSource, CsvSource, CsvFileSource, and ArgumentsSource.
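
As one illustration of these other source types, a @CsvSource test can pair inputs with expected outputs. This sketch reuses the article’s isEven method with test values invented for the example; @CsvSource lives in the org.junit.jupiter.params.provider package alongside @ValueSource and @MethodSource.


@ParameterizedTest
@CsvSource({
    "2, true",
    "3, false",
    "10, true",
    "11, false"
})
void testIsEvenWithCsvSource(int number, boolean expected) {
    // Each line of the CSV supplies one (number, expected) pair
    Assertions.assertEquals(expected, MathTools.isEven(number));
}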

JUnit 5’s test lifecycle

For many tests, there are things that you might want to do before and after each of your test runs and before and after all of your tests run. For example, if you were testing database queries, you might want to set up a connection to a database and import a schema before all the tests run, insert test data before each individual test runs, clean up the database after each test runs, and then delete the schema and close the database connection after all the tests run.

JUnit 5 provides the following annotations that you can add to methods in your test class for these purposes:

  • @BeforeAll: A static method in your test class that is called before all its tests run.
  • @AfterAll: A static method in your test class that is called after all its tests run.
  • @BeforeEach: A method that is called before each individual test runs.
  • @AfterEach: A method that is called after each individual test runs.

Listing 5 shows a very simple example that logs the invocations of the various lifecycle methods.

Listing 5. Logging the invocations of JUnit 5 lifecycle methods (LifecycleDemoTest.java)


package com.javaworld.geekcap.lifecycle;

import org.junit.jupiter.api.*;

public class LifecycleDemoTest {

    @BeforeAll
    static void beforeAll() {
        System.out.println("Connect to the database");
    }

    @BeforeEach
    void beforeEach() {
        System.out.println("Load the schema");
    }

    @AfterEach
    void afterEach() {
        System.out.println("Drop the schema");
    }

    @AfterAll
    static void afterAll() {
        System.out.println("Disconnect from the database");
    }

    @Test
    void testOne() {
        System.out.println("Test One");
    }

    @Test
    void testTwo() {
        System.out.println("Test Two");
    }
}

The output from running this test prints the following:


Connect to the database
Load the schema
Test One
Drop the schema
Load the schema
Test Two
Drop the schema
Disconnect from the database

As you can see from this output, the beforeAll method is called first and may do something like connect to a database or create a large data structure in memory. Next, the beforeEach method prepares the data for each test; for example, by populating a test database with an expected set of data. The first test then runs, followed by the afterEach method. This process (beforeEach -> test -> afterEach) continues until all the tests have completed. Finally, the afterAll method cleans up the test environment, possibly by disconnecting from a database.

Tags and filtering in JUnit 5

Before wrapping up this initial introduction to testing with JUnit 5, I’ll show you how to use tags to selectively run different kinds of test cases. Tags are used to identify and filter specific tests that you want to run in various scenarios. For example, you might tag one test class or method as an integration test and another as development code. The names and uses of the tags are all up to you.

We’ll create three new test classes and tag two of them as development and one as integration, presumably to differentiate between tests you want to run when building for different environments. Listings 6, 7, and 8 show these three simple tests.

Listing 6. JUnit 5 tags, test 1 (TestOne.java)


package com.javaworld.geekcap.tags;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

@Tag("Development")
class TestOne {
    @Test
    void testOne() {
        System.out.println("Test 1");
    }
}

Listing 7. JUnit 5 tags, test 2 (TestTwo.java)


package com.javaworld.geekcap.tags;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

@Tag("Development")
class TestTwo {
    @Test
    void testTwo() {
        System.out.println("Test 2");
    }
}

Listing 8. JUnit 5 tags, test 3 (TestThree.java)


package com.javaworld.geekcap.tags;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

@Tag("Integration")
class TestThree {
    @Test
    void testThree() {
        System.out.println("Test 3");
    }
}

Tags are implemented through annotations, and you can annotate either an entire test class or individual methods in a test class; furthermore, a class or a method can have multiple tags. In this example, TestOne and TestTwo are annotated with the “Development” tag, and TestThree is annotated with the “Integration” tag. We can filter test runs in different ways based on tags. The simplest of these is to specify a test in your Maven command line; for example, the following only executes tests tagged as “Development”:


mvn clean test -Dgroups="Development"

The groups property allows you to specify a comma-separated list of tag names for the tests that you want JUnit 5 to run. Executing this yields the following output:


[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running com.javaworld.geekcap.tags.TestOne
Test 1
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.029 s - in com.javaworld.geekcap.tags.TestOne
[INFO] Running com.javaworld.geekcap.tags.TestTwo
Test 2
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 s - in com.javaworld.geekcap.tags.TestTwo

Likewise, we could execute just the integration tests as follows:


mvn clean test -Dgroups="Integration"

Or, we could execute both development and integration tests:


mvn clean test -Dgroups="Development, Integration"

In addition to the groups property, JUnit 5 allows you to use an excludedGroups property to execute all tests that do not have the specified tag. For example, in a development environment, we do not want to execute the integration tests, so we could execute the following:


mvn clean test -DexcludedGroups="Integration"

This is helpful because a large application can have literally thousands of tests. If you wanted to create this environmental differentiation and add some new production tests, you would not want to have to go back and add a “Development” tag to the other 10,000 tests.

Finally, you can add these same groups and excludedGroups fields to the surefire plugin in your Maven POM file. You can also control these fields using Maven profiles. I encourage you to review the JUnit 5 user guide to learn more about tags.
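
For reference, the surefire portion of the POM with those fields added might look something like the following sketch; the tag values match the listings above, so adapt them to your own project.


<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>3.5.3</version>
    <configuration>
        <!-- Run only tests tagged "Development" and skip anything tagged "Integration" -->
        <groups>Development</groups>
        <excludedGroups>Integration</excludedGroups>
    </configuration>
</plugin>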

Conclusion

This article introduced some of the highlights of working with JUnit 5. I showed you how to configure a Maven project to use JUnit 5 and how to write tests using the @Test and @ParameterizedTest annotations. I then introduced the JUnit 5 lifecycle annotations, followed by a look at the use and benefits of filter tags.


Automating devops with Azure SRE Agent 5 Jun 2025, 11:00 am

Modern AI tools have been around for some time now, offering a wide variety of services. Much of what’s been delivered so far has only touched the surface of what’s possible; voice recognition and transcription tools are probably the most useful.

The development of agentic AI has changed things. Instead of using AI simply to generate text, we can use a natural language interface that draws on context to construct workflows and manage operations. Support for OpenAPI interfaces allows us to go from user intent to actual service calls, with newer technologies such as the Model Context Protocol (MCP) providing defined interfaces to applications.

Building agents into workflows

Agentic workflows don’t even need to be generated by humans; they can be triggered by events, providing completely automated operations that pop up as natural language in command lines and in tools like Teams. Microsoft’s Adaptive Cards, originally intended to surface elements of microwork inside Teams chats, are an ideal tool for triggering steps in a long, agent-managed workflow while keeping a human in the loop to respond to reports and trigger next steps.

Using agents to handle exceptions in business processes is a logical next step in the evolution of AI applications. We can avoid many of the issues that come with chatbots, as agents are grounded in a known state and use AI tools to orchestrate responses to a closed knowledge domain. A system prompt automatically escalates to a human when unexpected events occur.

Adding agents to site reliability engineering

One domain where this approach should be successful is in automating some of the lower-level functions of site reliability engineering (SRE), such as restarting servers, rotating certificates, managing configurations, and the like. Here we have a known ground state, a working stack composed of infrastructure, platforms, and applications. Microsoft describes this as agentic devops.

With modern cloud-native applications, much of that stack is encoded in configuration files, even infrastructure, using tools like Azure Resource Manager or Terraform templates, or languages like Bicep and Pulumi. At a higher level, APIs can be described using TypeSpec, with Kubernetes platforms described in YAML files.

All these configuration files, often stored as build artifacts in standard repositories, give us a desired state we can build on. In fact, if we’re in a Windows Server environment we can use PowerShell’s Desired State Configuration definitions as a ground state.

Once a system is operating, logs and other metrics can be collected and collated by Azure’s monitoring tools and stored in a Fabric data lake where analytics tools using Kusto Query Language can query that data and generate reports and tables that allow us to analyze what’s happening across what can be very complex, distributed systems. Together these features are the back ends for SRE dashboards, helping engineers pinpoint issues and quickly deploy fixes before they affect users.

Azure agent tools for the rest of us

With its existing devops tools, Microsoft has many of the necessary pieces to build a set of SRE agents. Its existing agent tools and expanding roster of MCP servers can be mixed with this data and these services to provide alerts and even basic remediation. So, it wasn’t surprising to hear that Azure is already using something similar internally and will shortly be launching a preview of a public SRE agent. Microsoft has a long history of taking internal tools and turning them into pieces of its public-facing platform. Much of Azure is built on top of the systems Microsoft needed to build and run its cloud applications.

Announced at Build 2025, Azure SRE Agent is designed to help you manage production services using reasoning large language models (LLMs) to determine root causes and suggest fixes based on logs and other system metrics. The underlying approach is more akin to traditional machine learning tools: looking for exceptions and then comparing the current state of a system with best practices and with desired state configurations to get your system back up and running as quickly as possible—even before any users have been impacted.

The aim of Azure SRE Agent is to reduce the load on site reliability engineers, administrators, and developers, allowing them to concentrate on larger tasks without being distracted from their current flow. It’s designed to run in the background, using normal operations to fine-tune the underlying model to fit applications and their supporting infrastructure.

The resulting context model can be queried at any time, using natural language, much like using the Azure MCP Server in Visual Studio Code. As the system is grounded in your Azure resources, results will be based on data and logs, much like using a specific retrieval-augmented generation (RAG) AI tool but without the complexity that comes with building a vector index in real time. Instead, services like Fabric’s data agent provide an API that manages queries for you. There’s even the option to use such tools to visualize data, using markup to draw graphs and charts.

Making this type of agent event-driven is important, as it can be tied to services like Azure’s Security Graph. By using current Azure security policies as a best practice, it’s able to compare current state with what it should be, informing users of issues and performing basic remediations in line with Azure recommendations. For example, it can update web servers to a new version of TLS, ensuring that your applications remain online.

Events can be sourced from Azure tools like Monitor, pulling alert details to drive an automated root-cause analysis. As the agent is designed to work with known Azure data sources, it’s able to use these to detect exceptions and then determine the possible cause, reporting back its conclusions to a duty site reliability engineer. This gives the engineer not only an alert but a place to start investigations and remediations.

There is even the option of handling basic fixes once they are approved by a site reliability engineer. The list of approved operations is sensibly small, including triggering scaling, restarting, and where appropriate, rolling back changes.

Recording what has been discovered and what has been done is important. The root-cause analysis, the problem discovery, and any fixes will be written to the application’s GitHub repository as an issue so further investigations can be carried out and longer-term changes made. This is good devops practice, as it brings the developer back into the site reliability discussion, ensuring that everyone involved is informed and that devops teams can make plans to keep this and any other problem from happening again.

The agent produces daily reports focused on incidents and their status, overall resource group health, a list of possible actions to improve performance and health, and suggestions for possible proactive maintenance.

Getting started with Azure SRE Agent

The Azure SRE Agent is currently a gated public preview (you need to sign up to get access). Microsoft has made much of the documentation public so you can see how it works. Working with SRE Agent appears to be relatively simple, starting with creating an agent from the Azure portal and assigning it to an account and a resource group. You’ll need to choose a region to run the agent from, currently only Sweden Central, though it can monitor resource groups in any region.

For now, interacting with the agent takes place in the Azure portal. Most site reliability engineers working with Azure will have the portal open most of the time, but it’s not really the best place to have chats. Hopefully Microsoft will take advantage of Teams and Adaptive Cards to bring the agent closer to other parts of its ecosystem. Delivering reports as Power BI dashboards would also help the SRE Agent fit into a typical service operations center.

Tools like this are an important evolution of AI and generative AI in particular. They’re not chatbots, though you can chat with them. Instead, they’re grounded in real-time data and they use reasoning approaches to extract context and meaning from data, using that context to construct a workflow based on best practices and current and desired system states. Building the agent around a human-in-the-loop model, where human approval is required before any actions are taken, may slow things down but will increase trust in the early stages of a new way of working.

It’ll be interesting to see how this agent evolves, as Microsoft rolls out deeper Azure integration through its MCP servers and as languages like TypeSpec give us a way to add context to OpenAPI descriptions. Deeply grounded AI applications should go a long way to deliver on the Copilot promise, providing tools that make users’ tasks easier and don’t interrupt them. They will also show us the type of AI applications we should be building as the underlying platforms and practices mature.


AI is powering enterprise development, GitHub says 5 Jun 2025, 12:17 am

AI is becoming increasingly important in enterprise software development, particularly with the rise of accompanying agentic models, GitHub’s Martin Woodward, vice president of developer relations, said during his talk at this week’s GitHub Galaxy online conference. He also said devops was a critical factor in AI usage.

In a presentation on software development and AI, titled “The AI-Accelerated Enterprise,” Woodward stressed that AI tools are being adopted in different ways. “AI has been and can be used across your entire application pipeline to kind of accelerate teams,” Woodward said. “But what we’re seeing is that the teams who are best placed with mature devops practices, they’re currently the ones who are best-positioned to take advantage of the power of AI, having processes and technology and guardrails in place to help you ship more quickly.”  

Referring to GitHub Copilot, Woodward noted that AI-powered programming tools are moving beyond autocomplete capabilities that help developers more quickly iterate through their code into the next phase of AI coding assistance. “We’re seeing the rise of agentic-based methods in software development and we’re also seeing AI-native ways of building software,” he said. Developers using Visual Studio Code on GitHub now have a huge range of models available to them, Woodward said.

Devops also is being refined based on agentic patterns, Woodward said. He presented a slide decreeing, “Teams will get better at devops with agents,” with the slide noting that agents help developers with tasks such as fixing bugs, targeted refactoring, doing code reviews, and performing dependency upgrades. Also presented by Woodward was a slide describing the software of the future, noting that future applications will be cloud-native, centered on customer experience, and intelligent, with AI being built into customer experiences.

Leading companies already are focused on customer experience, Woodward said.


Snowflake customers must choose between performance and flexibility 4 Jun 2025, 5:51 pm

Snowflake (NYSE:SNOW) is boosting the performance of its standard data warehouses and introducing a new adaptive technology to help enterprises optimize compute costs — but customers will have to choose one or the other.

Adaptive Warehouses, built atop Snowflake’s Adaptive Compute, will lower the burden of compute resource management by maximizing efficiency through resource sizing and sharing, while the Gen 2 standard data warehouses will double analytics performance, the company said.

[ Related: More Snowflake news and insights ]

Robert Kramer, principal analyst at Moor Insights and Strategy, sees advantages in Snowflake’s Adaptive Warehouses. “This eliminates the need for customers to guess warehouse sizes or manually configure concurrency settings. This simplifies Snowflake management and potentially reduces costs, especially for teams without dedicated cloud administrators,” he said. The new warehouses can consolidate analytics tasks into a shared pool that automatically balances usage, and this setup reduces idle compute and adapts to shifting demands, ensuring SLAs are met without constant oversight, he said.

The automated resource management capability of Adaptive Warehouses, which is still in private preview, could also enable enterprises looking to drive a variety of AI and advanced analytics use cases to experiment and deliver applications for those use cases faster.

The Adaptive Compute technology underlying Adaptive Warehouses is not new, and its use here is a sign of the company’s shift towards elastic and serverless infrastructure, a strategy that hyperscalers such as AWS, Google, and Microsoft have also used to strengthen their enterprise offerings.

It also helps Snowflake stay competitive with vendors like Databricks, which already support automatic scaling, Kramer said.

Managed compute offerings not suited for all workloads

There are some practical constraints on the use of serverless databases or data warehouses that enterprises should consider, said Matt Aslett, director of advisory firm ISG.

“If an enterprise needs control over physical hardware to meet specific performance requirements, then serverless is not an appropriate option. Additionally, long-running workloads may not deliver the advertised benefits in terms of cost-savings compared to ephemeral workloads,” Aslett said.

“Generally, serverless databases are best suited to development and test environments as well as lightweight applications and workloads with sporadic usage,” Aslett explained, adding that he expects Snowflake to provide customers with best practice guidelines as Adaptive Warehouses move from private preview to general availability.

Snowflake’s head of core platform division Artin Avanes said switching from a standard virtual warehouse to an Adaptive Warehouse is “as simple as running an alter command with no downtime required.”

Avanes said the rationale behind enabling customers to switch data warehouses is that Snowflake customers often say that consolidating workloads can be disruptive and time consuming, especially when warehouse names are hard coded into pipelines and scripts.

“With Adaptive Warehouses, users can simply convert their production workloads in batches, while still maintaining existing warehouse names, policies, permissions, and chargeback reporting structure,” he said.

Gen 2 generally available

For those not ready to switch, Snowflake has also made the Gen 2 update to its virtual standard data warehouse platform generally available.

Gen 2 has upgraded hardware and software to effect performance enhancements — “2.1x faster analytics”, said Avanes.

Comparing Adaptive Warehouses to Gen 2, Constellation Research principal analyst Michael Ni described Gen2 as a high-performance engine and Adaptive Compute as the autopilot.

“Gen2 delivers raw speed—2x faster analytics and up to 4x faster DML—but Adaptive Compute is about automation. While they’re separate today, Adaptive Compute is designed to run on the best available hardware, which likely includes Gen2,” Ni added.

However, Ni said, Snowflake customers currently have to choose a specific data warehouse — either Adaptive or Gen 2 — and the two functionalities cannot be combined.

“For now, customers get performance today and automation tomorrow,” he said.



Naming is easy! A guide for developers 4 Jun 2025, 11:00 am

There is an old joke in programming circles: 

There are two hard things in programming: cache invalidation, naming things, and off-by-one errors.

The first is truly hard, the third is the joke, but the second one? That one baffles me. Naming things is easy, or at least it should be. But for reasons unknown, we make it hard. Just call it what it is or what it does. But for some reason, we developers have an aversion to doing that. Here are some thoughts about why naming is hard for developers, along with some ideas for doing it right.

Nine reasons for bad naming

  1. We assume everyone knows what we know
  2. Meaning drift
  3. Sheer laziness
  4. Too eager to abbreviate
  5. Forgetting functions are verbs
  6. Inconsistency
  7. Going negative
  8. Prefixing
  9. Blah words

We assume everyone knows what we know

This is the most common reason that we name things badly. I know that EmpNo stands for EmployeeNumber, and that EmployeeNumber is the unique ID in the database. Why wouldn’t everyone else? Well, because the next poor developer who comes along might think that the EmployeeNumber is the number assigned to the employee for logging in, and has nothing to do with unique values in the database. 

If EmployeeNumber is the unique ID in the database, why not just call it that? How about calling it EmployeeUniqueIDInDatabase? Sure, it’s a bit of a mouthful, but no one will ever mistake it for something else, right? 

And if you say to me, “Nick, that’s too much typing!” I’ll give you my standard response: Lazy is no way to go through life, my friend. Plus, these days, your IDE will do all of that typing for you. A long, clear, explanatory name is always superior to an abbreviation you think is obvious but really isn’t. 

Meaning drift

Sometimes the meaning of a name can be less precise than it might be, and that meaning can drift over time. You might start out with a method called SaveReceipt that puts a copy of the receipt in the database. But over time, you may add printing to the routine, and move the actual saving to a different method, and suddenly your name is lying to you. Naming it SaveReceiptToTheDatabase in the first place might make that harder to happen.

Sheer laziness

While naming things isn’t hard, it does take a bit of thought. I guess some folks just don’t want to take the time to think about naming things. I’m not even going to talk about how silly it is to use a single letter for a variable name. The only exception I’ll grant is using i as the variable in a loop. (But I’ll argue vehemently that Index is better.)

Otherwise, give a variable a really good, full name. Sure, it may take some effort, but if you stop and ask, “What, exactly, is this thing?” and then name it based on what your answer is, you’ll have a great name. For instance, if you feel the need to do this:


If (EmployeeNumber > 0) and (OrderNumber >  0) {
 // ...
}

Don’t be afraid to go the extra mile:


EmployeeIsValid = EmployeeUniqueIDInDatabase > 0;
ThereIsAnOrder = OrderNumber > 0;
ItIsOkayToProcessTheOrder = EmployeeIsValid and ThereIsAnOrder;
If ItIsOkayToProcessTheOrder {
  // ...
}

That is massively more readable, and the variable names clearly explain what they represent. It would be very hard to confuse what is happening there, and it reduces the cognitive load of the next developer, who no longer has to parse complex boolean statements. 

Too eager to abbreviate

Laziness isn’t good, but neither is being in a hurry. Being in a hurry might cause us to abbreviate things when there is no need to do so. Remember, the IDE will do a lot of typing for you. You might think you’re saving .876 seconds by typing acctBlnc instead of accountBalance, but really you’re just stealing precious hours from the poor guy maintaining your code. 

And while we are at it, whose account balance is that? The company’s? The customer’s? Who knows?

Why developers are afraid of long names is a mystery to me. Don’t abbreviate anything unless it is an industry standard like URL or HTTP. Again, just type it out.

Forgetting functions are verbs

All methods should be named as verbs and should completely describe what they do. getCustomer is good, but where are you getting the customer from? What, exactly, are you getting? getCustomerInstanceFromDatabase is better. Again, if you ask yourself, “What is this function doing?” and then just name it based on your complete answer, you’ll have more maintainable code.

Inconsistency

It’s easy to be inconsistent. For instance, if you use the word Customer to mean a person standing in front of the point-of-sale system buying something, then make sure that is what you call them everywhere in the system. Don’t use the word Client to describe them, ever. Don’t call them Buyer in some other module. Use the same term for the same thing consistently, throughout your repository.

Going negative

As I mentioned a couple weeks ago, keep names positive, particularly Booleans (if you must use them). Names like isNotValid and denyAccess become abominations. For example:


if (!IsNotValid) or (!denyAccess) {
  // ...
} 

There is a reason we don’t use double negatives in the English language. You also should avoid them in your code.
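
Named positively, the same kind of check practically reads itself. Here’s a rough sketch in the same pseudocode style as above, using hypothetical names:


if (EmployeeIsValid) and (AccessIsAllowed) {
  // ...
}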

Prefixing

Back in the day, Hungarian notation was all the rage. All names had a prefix that defined what they were in addition to the name. This has gone out of vogue, as it got complex. I prefer that the name be expressive enough to allow the maintainer to surmise what it is. For instance, EmployeeCount is obviously an integer, and FirstName is obviously a string.

Some people like to prefix their variable names with a letter to indicate the role that a variable plays in a method—for example, ‘l’ for local variables, ‘a’ for method arguments or parameters, and so on. I frown on this kind of thing. If your methods are so big that you can’t tell at a glance what role a variable is playing, then you need to refactor.

Blah words

Another thing to avoid is using words that have no real meaning but seem important somehow. Avoid words like Helper, Handler, Service, Util, Process, Info, Data, Task, Stuff, or Object. These are “blah words,” or naming junk food—empty words that serve no purpose. What method do you write that isn’t helpful? Which ones don’t handle something? What in your app isn’t actually data? How much code do you write that doesn’t perform a task?

Naming is easy

Naming things is easy, or it should be. Like many things in life, it can all go well if you avoid doing bad things more than worrying about doing good things. Good naming dramatically reduces the cognitive load when maintaining code. By making the code clearer, it helps developers avoid mistakes and introducing bugs. The mere seconds it takes to stop and think about a name can save literally hours down the road.

Coding for clarity and maintainability is always the way to go. Naming things properly, completely, and clearly is a huge part of writing good code. Besides, to paraphrase the famous saying, “Name things like the person maintaining your code is a sleep-deprived psychopath with your home address.”


New to Rust? Don’t make these common mistakes 4 Jun 2025, 11:00 am

When you start learning a programming language, you will inevitably experience some frustration along with the fun. Some of that frustration comes with learning anything new and complicated, and some is unnecessary—self-inflicted, even! When learning Rust, it helps to know about basic things you should do, as well as things you really shouldn’t.

This article is a brief look at four “don’ts”—or “don’t do this, do that instead.” It’s advice worth taking to heart when learning Rust.

Don’t assume Rust learning material is up to date

Rust is a fast-moving language, and the documentation doesn’t always keep up. Don’t be misled by guides or other learning materials that contain outdated examples. Anyone can fall into this trap, but it’s twice as easy if you are unfamiliar with the language’s evolution or its current state of the art.

For instance, older editions of Rust had a macro named try!, which was used for unwrapping a result and propagating any errors that might be returned. It’s since been superseded by the ? operator, which is a native piece of Rust syntax and not a macro. But if you rely on older documentation, you might run into examples featuring try! or other outdated concepts as if they were current.
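
For example, where an older guide might wrap a call in try!, current Rust propagates the error with ?. Here’s a minimal sketch; the function name and file path are invented for illustration:


use std::fs::File;
use std::io::{self, Read};

// Propagate any I/O error to the caller with `?` instead of the old try! macro.
fn read_config(path: &str) -> Result<String, io::Error> {
    let mut file = File::open(path)?;
    let mut contents = String::new();
    file.read_to_string(&mut contents)?;
    Ok(contents)
}

fn main() {
    match read_config("Config.toml") {
        Ok(text) => println!("read {} bytes", text.len()),
        Err(err) => eprintln!("could not read config: {}", err),
    }
}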

Any Rust documentation older than two years might already be getting bewhiskered, so check the dates. Double your skepticism when you come across undated material, with the possible exception of the official Rust documentation pages.

Don’t revert to using C/C++ (or any other language) in Rust

Some Rust concepts may be frustrating to learn at first. The ownership/borrowing metaphors for memory management and type-based error handling both impose a significant learning curve. But don’t fall into the trap of trying to work around these concepts by resurrecting metaphors from other languages.

In particular, don’t try to use C-style, or even C++-style, memory management or raw memory access by way of pointers. References are safe because they are always tracked through borrowing and ownership. But pointers—which you must explicitly opt-in to use—can in theory point to anything, so they’re inherently unsafe.

The solution here isn’t to liberally sprinkle your code with unsafe{} so you can dereference pointers inside those regions. It’s to use references rather than pointers from the get-go. Get to know types like Box, Rc, and Arc, which apply Rust’s ownership rules to heap-allocated and shared memory. That way, you’ll never have to do the raw pointer dance in the first place.
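
As a quick sketch of what that looks like in practice (the values here are arbitrary and just for illustration), Box gives you owned heap allocation and Rc gives you shared ownership, with no raw pointers or unsafe blocks in sight:


use std::rc::Rc;

fn main() {
    // Box: single-owner heap allocation, freed automatically when it goes out of scope.
    let boxed = Box::new(vec![1, 2, 3]);
    println!("first element: {}", boxed[0]);

    // Rc: shared ownership via reference counting (use Arc instead in multi-threaded code).
    let shared = Rc::new(String::from("shared value"));
    let another_owner = Rc::clone(&shared);
    println!("owner count: {}", Rc::strong_count(&another_owner)); // prints 2
}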

If you have literally no choice in the matter, confine pointers to the smallest possible unsafe{} regions and get safe Rust values out of them instead.

Your best bet is to learn the native Rust way to do things and avoid falling back on habits from other languages. C++ developers often get hung up on trying to reproduce C++’s idioms (and quirks, and even its limits) in Rust. If you are one of those beleaguered folks, the C++ to Rust Phrasebook might come in handy. It shows you how to transition elegantly from C++ to Rust concepts.

Don’t try to learn all the Rust string types (yet)

Rust’s ecosystem has many string types for processing text, most of them built for highly specific tasks. Unless you’re trying to do something more than just passing strings between functions or printing them to the console, you don’t need to worry about most of them.

Focus on the two most common string types: str (immutable, essentially what string literals give you in code), and String (mutable, always stored on the heap). Use str to create string constants, and use &str to get borrowed references to existing string values. Everything else can wait, for now.
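
Here is a minimal sketch of the two types working together; the function and variable names are invented for illustration:


fn shout(message: &str) -> String {
    // &str borrows existing string data; String owns growable, heap-allocated text.
    let mut owned = String::from(message);
    owned.push('!');
    owned
}

fn main() {
    let literal: &str = "hello";     // a string literal is a borrowed &str
    let result = shout(literal);     // the function hands back an owned String
    println!("{}", result);          // prints "hello!"
    println!("{}", shout(&result));  // &String coerces to &str, so this works too
}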

Don’t sweat using .clone() to sidestep borrowing (at first)

When you’re writing your first Rust programs, the complexities of ownership and borrowing can be dizzying. If all you want to do is write a simple program that doesn’t need to be performant or hugely memory-optimized, Rust’s memory management might seem intrusive.

This isn’t always going to be true; in fact, your growth as a Rust developer depends on learning when memory management is essential. But in the very early stages of Rust-dom, when you’re still getting your sea legs inside the language’s syntax and tooling, that feature can feel like a burden.

One way to reduce your worry about borrowing—both now and later—is to clone objects rather than transfer ownership. Cloning creates a new instance of the same data but with a new, independent owner. The original instance keeps the original owner, so there are no issues with object ownership. And, as with the original object, the clone will be dropped automatically once it goes out of scope.

On the downside, constructing a clone can be expensive, especially if you’re doing it many times in a loop. You’ll want to avoid it in performance-sensitive code. But until you’re writing such code—and learning how to use metrics to detect hot paths in the code—you can use cloning to clarify the ownership chain of your objects.
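
Here is a small, hypothetical example of the trade-off (the Report struct and archive function are invented for illustration): archive() takes ownership of its argument, so passing a clone lets the caller keep using the original:


#[derive(Clone, Debug)]
struct Report {
    title: String,
    lines: Vec<String>,
}

// Takes ownership of its argument; whatever is passed in is moved here and dropped at the end.
fn archive(report: Report) {
    println!("archiving {:?}", report.title);
}

fn main() {
    let report = Report {
        title: String::from("quarterly numbers"),
        lines: vec![String::from("revenue up")],
    };

    // The clone costs an extra allocation, but the original stays usable after the call.
    archive(report.clone());
    println!("still usable: {} lines", report.lines.len());
}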

JavaScript promises: 4 gotchas and how to avoid them 4 Jun 2025, 11:00 am

I’ve previously covered the basics of JavaScript promises and how to use the async/await keywords to simplify your existing asynchronous code. This article is a more advanced look at JavaScript promises. We’ll explore four common ways promises trip up developers and tricks for resolving them.

Gotcha #1: Promise handlers return promises

If you’re returning information from a then or catch handler, it will always be wrapped in a promise, if it isn’t a promise already. So, you never need to write code like this:


firstAjaxCall.then(() => {
  return new Promise((resolve, reject) => {
    nextAjaxCall().then(() => resolve());
  });
});

Since nextAjaxCall also returns a promise, you can just do this instead:


firstAjaxCall.then(() => {
  return nextAjaxCall();
});

Additionally, if you’re returning a plain (non-promise) value, the handler will return a promise resolved to that value, so you can continue to call then on the results:


firstAjaxCall.then((response) => {
  return response.importantField
}).then((resolvedValue) => {
  // resolvedValue is the value of response.importantField returned above
  console.log(resolvedValue);
});


This is all very convenient, but what if you don’t know the state of an incoming value?

Trick #1: Use Promise.resolve() to resolve incoming values

If you are unsure if your incoming value is a promise already, you can simply use the static method Promise.resolve(). For example, if you get a variable that may or may not be a promise, simply pass it as an argument to Promise.resolve. If the variable is a promise, the method will return the promise; if the variable is a value, the method will return a promise resolved to the value:


let processInput = (maybePromise) => {
  let definitelyPromise = Promise.resolve(maybePromise);
  definitelyPromise.then(doSomeWork);
};

Gotcha #2: .then always takes a function

You’ve probably seen (and possibly written) promise code that looks something like this:


let getAllArticles = () => {
  return someAjax.get('/articles');
};
let getArticleById = (id) => {
  return someAjax.get(`/articles/${id}`);
};

getAllArticles().then(getArticleById(2));

The intent of the above code is to get all the articles first and then, when that’s done, get the article with the ID of 2. While we might have wanted sequential execution, what’s actually happening is that these two promises are started at essentially the same time, which means they could complete in any order.

The issue here is we’ve failed to adhere to one of the fundamental rules of JavaScript: that arguments to functions are always evaluated before being passed into the function. The .then is not receiving a function; it’s receiving the return value of getArticleById. This is because we’re calling getArticleById immediately with the parentheses operator.

There are a few ways to fix this.

Trick #1: Wrap the call in an arrow function

If you wanted your two functions processed sequentially, you could do something like this:


// A little arrow function is all you need

getAllArticles().then(() => getArticleById(2));

By wrapping the call to getArticleById in an arrow function, we provide .then with a function it can call when getAllArticles() has resolved.

Trick #2: Pass in named functions to .then

You don’t always have to use inline anonymous functions as arguments to .then. You can easily assign a function to a variable and pass the reference to that function to .then instead.


// function definitions from Gotcha #2
let getArticle2 = () => {
  return getArticleById(2);
};

getAllArticles().then(getArticle2);

In this case, we are just passing in the reference to the function and not calling it.

Trick #3: Use async/await

Another way to make the order of events more clear is to use the async/await keywords:


async function getSequentially() {
  const allArticles = await getAllArticles(); // Wait for first call
  const specificArticle = await getArticleById(2); // Then wait for second
  // ... use specificArticle
}

Now, the fact that we take two steps, each following the other, is explicit and obvious. We don’t proceed with execution until both are finished. This is an excellent illustration of the clarity await provides when consuming promises.

Gotcha #3: Non-functional .then arguments

Now let’s take Gotcha #2 and add a little extra processing to the end of the chain:


let getAllArticles = () => {
  return someAjax.get('/articles');
};
let getArticleById = (id) => {
  return someAjax.get(`/articles/${id}`);
};

getAllArticles().then(getArticleById(2)).then((article2) => { 
  // Do something with article2 
});

We already know that this chain won’t run sequentially as we want it to, but now we’ve uncovered some quirky behavior in Promiseland. What do you think is the value of article2 in the last .then?

Since we’re not passing a function into the first argument of .then, JavaScript passes in the initial promise with its resolved value, so the value of article2 is whatever getAllArticles() has resolved to. If you have a long chain of .then methods and some of your handlers are getting values from earlier .then methods, make sure you’re actually passing in functions to .then.

Trick #1: Pass in named functions with formal parameters

One way to handle this is to pass in named functions that define a single formal parameter (i.e., take one argument). This allows us to create some generic functions that we can use within a chain of .then methods or outside the chain.

Let’s say we have a function, getFirstArticle, that makes an API call to get the newest article in a set and resolves to an article object with properties like ID, title, and publication date. Then say we have another function, getCommentsForArticleId, that takes an article ID and makes an API call to get all the comments associated with that article.

Now, all we need to connect the two functions is to get from the resolution value of the first function (an article object) to the expected argument value of the second function (an article ID). We could use an anonymous inline function for this purpose:


getFirstArticle().then((article) => {
  return getCommentsForArticleId(article.id);
});

Or, we could create a simple function that takes an article, returns the ID, and chains everything together with .then:


let extractId = (article) => article.id;
getFirstArticle().then(extractId).then(getCommentsForArticleId);

This second solution somewhat obscures the resolution value of each function, since they’re not defined inline. But, on the other hand, it creates some flexible functions that we could likely reuse. Notice, also, that we’re using what we learned from the first gotcha: Although extractId doesn’t return a promise, .then will wrap its return value in a promise, which lets us call .then again.

Trick #2: Use async/await

Once again, async/await can come to the rescue by making things more obvious:


async function getArticleAndComments() {
  const article = await getFirstArticle();
  const comments = await getCommentsForArticleId(article.id); // Extract ID directly
  // ... use comments
}

Here, we simply wait for getFirstArticle() to finish, then use the article to get the ID. We can do this because we know for sure that the article was resolved by the underlying operation.

Gotcha #4: When async/await spoils your concurrency

Let’s say you want to initiate several asynchronous operations at once, so you put them in a loop and use await:


// (Bad practice below!)
async function getMultipleUsersSequentially(userIds) {
  const users = [];
  const startTime = Date.now();
  for (const id of userIds) {
    // await pauses the *entire loop* for each fetch
    const user = await fetchUserDataPromise(id); 
    users.push(user);
  }
  const endTime = Date.now();
  console.log(`Sequential fetch took ${endTime - startTime}ms`);
  return users;
}
// If each fetch takes 1.5s, 3 fetches would take ~4.5s total.

In this example, what we want is to send all these fetchUserDataPromise() requests together. But what we get is each one occurring sequentially, meaning the loop waits for each to complete before continuing to the next.

Trick #1: Use Promise.all

Solving this one is simple with Promise.all:


// (Requests happen concurrently)
async function getMultipleUsersConcurrently(userIds) {
  console.log("Starting concurrent fetch...");
  const startTime = Date.now();
  const promises = userIds.map(id => fetchUserDataPromise(id));

  const users = await Promise.all(promises);

  const endTime = Date.now();
  console.log(`Concurrent fetch took ${endTime - startTime}ms`);
  return users;
}
// If each fetch takes 1.5s, 3 concurrent fetches would take ~1.5s total (plus a tiny overhead).

Promise.all takes all the promises in the array (each fetch is already running by the time map creates it) and waits until they have all completed before continuing. In this use case, working with promises directly is simpler than awaiting each one in a loop. (But notice we’re still using await to wait for Promise.all to complete.)

Conclusion

Although we often can use async/await to resolve issues in promises, it’s critical to understand promises themselves in order to really understand what the async/await keywords are doing. The gotchas are intended to help you better understand how promises work and how to use them effectively in your code.

Kotlin cozies up to Spring Framework 4 Jun 2025, 12:26 am

JetBrains is deepening its collaboration with the Spring platform team, with the goal of making the Kotlin language a top choice for professional server-side work.

The JetBrains-Spring partnership, announced May 22, is intended to make Kotlin a more natural and powerful choice for building Spring applications, JetBrains said. Spring is a well-established framework for developing enterprise Java applications. 

As part of the partnership, JetBrains is building a newer and faster version of its reflection library, kotlinx.reflect, to improve performance in scenarios relying heavily on reflection, such as serialization and dependency injection.

Key areas of the collaboration include:

  • Improving Kotlin’s null-safety support across the Spring framework, strengthening type safety in Kotlin code for Spring apps.
  • Delivering the new Bean Registration DSL (domain-specific language) to provide a foundation for better support for lambda and DSL-based bean definition.
  • Making Core Spring learning materials available in Kotlin.

Kotlin already shines when building Spring applications, JetBrains said, thanks to features like named and default parameters, which remove the need for the builder pattern and other overload-related boilerplate. Kotlin also encourages modular design through the use of extension functions and top-level functions, according to the company. The Spring team, meanwhile, has supported Kotlin features such as coroutines, Kotlin extensions, and configuration DSLs. So far, 27% of Spring developers have used Kotlin, JetBrains said.

Snowflake acquires Crunchy Data for enterprise-grade PostgreSQL to counter Databricks’ Neon buy 3 Jun 2025, 10:17 pm

Snowflake (Nasdaq: SNOW) has announced its intent to acquire Crunchy Data, a US-based provider of cloud-based PostgreSQL databases, for an undisclosed sum. The deal is meant to give developers an easier way to build AI-based applications by adding a PostgreSQL database, to be dubbed Snowflake Postgres, to Snowflake’s AI Data Cloud.

The timing of the deal, according to Ishi Thakur, analyst at Everest Group, makes it abundantly clear that this is Snowflake’s answer to rival Databricks’ acquisition of open source serverless Postgres company Neon to integrate PostgreSQL architecture into its Data Intelligence Platform.

The two data warehouse providers have long been locked in stiff competition for a larger share of the data analytics and AI market, which has led them to announce new and similar offerings in close succession.

The biggest example of the rivalry is the companies’ adoption of different open source table formats for managing large-scale data in data lakes and lakehouses: Snowflake chose Apache Iceberg tables, while Databricks backed Delta Lake tables. The two vendors have also raced to open source their formerly closed-source unified governance catalogs, Polaris and Unity.

However, Constellation Research’s principal analyst Michael Ni pointed out that the Crunchy Data and Neon acquisitions are just another round in the Snowflake-Databricks chess match that has transcended the big data and analytics space.

“This isn’t about big data analytics anymore — it’s about becoming the AI-native data foundation unifying analytics, operational storage, and machine learning,” Ni said.

Moor Insights and Strategy principal analyst Robert Kramer pointed out, however, that although both companies are trying to add PostgreSQL to their stacks, their strategies differ.

“Snowflake’s focus is on enterprise readiness, integration, and governance targeting large enterprises, and Databricks is prioritizing a serverless, cloud-native PostgreSQL optimized for AI agent development and low-latency transactions, appealing to developers and startups,” Kramer said.

Creating a data intelligence platform

Snowflake’s decision to offer PostgreSQL inside the AI Data Cloud, according to Bradley Shimmin, lead of the data and analytics practice at The Futurum Group, is about further blending operational and analytical workloads within the same platform, an approach most data vendors are now pursuing under the label of a data intelligence platform.

Acquisitions like these open up the broader data and application development landscape and market segment to such vendors, Shimmin said.

They are driven by enterprises increasingly rethinking their application development practices, not just to incorporate generative AI but also to bring in data more freely to support AI-driven use cases, Shimmin added.

The decision to offer its own version of Postgres, despite the open source version’s popularity among developers for its flexibility, lower cost, and AI features, is rooted in Snowflake’s strategy of addressing the open source version’s shortcomings in security, compliance, and scalability, according to Kramer.

Snowflake Postgres will help enterprises build AI applications while keeping their data safe and reliable, he added.

Constellation Research’s Ni expects Snowflake Postgres to “go down well” with the Postgres community, as it does not seek to create a fork of PostgreSQL but instead to carry on the work already done by Crunchy Data.

Why did Snowflake choose Crunchy Data?

Snowflake, which had a choice of companies to acquire to integrate PostgreSQL into its offerings, chose Crunchy Data mainly because of the community’s trust in the company and its offerings, analysts said.

“Crunchy Data wasn’t just a tech buy—it was a trust buy. Snowflake wanted battle-tested Postgres, not a startup experiment,” said Ni.

Moor Insights and Strategy’s Kramer seconded Ni, and said that Crunchy Data is known for its strong security, scalability, and compliance features, essential for mission-critical applications in regulated industries.

“The company has a proven track record with large enterprises and government agencies, offers developer-friendly tools, and provides performance metrics and connection pooling right out of the box. These strengths align with Snowflake’s goal of providing a PostgreSQL solution that meets enterprise customer needs,” Kramer added.

Crunchy Data’s experience and success with regulated industries also aligns with Snowflake’s path to increase its focus on vertical AI offerings, and should help it win workloads in government, finance, and healthcare industries, Ni said.

Snowflake Postgres, the offering planned after the close of the Crunchy Data acquisition, is expected to be in private preview soon, Snowflake said. It did not offer any timeline for its general availability.

The cloud-based data warehouse vendor has said it will support existing Crunchy Data customers, as well as make what it called “strong commitments” to the Postgres community.

Snowflake takes aim at legacy data workloads with SnowConvert AI migration tools 3 Jun 2025, 8:30 pm

Snowflake is hoping to win business with a new tool for migrating old workloads, SnowConvert AI, that it claims can help enterprises move their data, data warehouses, business intelligence (BI) reports, and code to its platform without increasing complexity.

Powered by Snowflake’s Cortex AI Agents, the suite can halve the time taken to migrate workloads, the company said.

SnowConvert AI includes three tools: an AI-powered migration assistant, AI-powered code verification, and AI-powered data validation. While the AI-powered migration assistant will help enterprises move data to Snowflake from other data warehouses such as Oracle, Teradata, and Google BigQuery or from other cloud data platforms, the other two tools will help in automatically converting and validating code, ETL tools, and BI reports.

Marlanna Bozicevich, research analyst at IDC, said, “By automating code conversion from legacy systems, SnowConvert significantly reduces the time, cost, and risk associated with migrating to Snowflake.”

Typically, enterprises could encounter challenges such as schema mismatches, code incompatibilities, data corruption, and workflow disruptions during data migration.

SnowConvert AI’s code verification tool reduces manual review time by providing detailed explanations and remediation suggestions for conversion errors directly within development environments, Bozicevich said.

The fact that SnowConvert AI is free will attract enterprises, as high costs in other areas have been a major pain point for Snowflake customers, she said.

The Futurum Group’s lead for the data and analytics practice, Bradley Shimmin, said the automated data validation tool will drive value for enterprises, which typically have to manually test that logic, transformations, and operations are correctly translated to the new platform’s syntax and semantics.

But enterprises may be drawn to other vendors offering their own data migration services, including Informatica, which was recently acquired by Salesforce, or cloud services providers, such as AWS and Microsoft.

However, the most comparable to SnowConvert AI, according to Bozicevich, is Databricks’ BladeBridge-driven tool, which offers AI-powered insights into the scope of conversion, configurable code transpiling, LLM-powered conversion, and easy validation of migrated systems.

Constellation Research principal analyst Michael Ni described the launch of SnowConvert AI as a “land-grab” strategy for Snowflake.

“Snowflake isn’t just courting cloud budgets. It’s going after prospects and their workloads who feel stranded on older systems. Free SnowConvert AI weaponizes migration as a go-to-market strategy and makes modernization too easy to ignore,” Ni added.

SnowConvert AI’s AI-powered code verification and data validation tools are in preview. The company said it expects to release the data migration assistant soon.

C# 14 introduces file-based apps 3 Jun 2025, 6:29 pm

Taking a lesson from scripting languages such as Python and JavaScript, Microsoft has introduced a file-based apps capability for the C# language, which is intended to streamline C# development.

Introduced in .NET 10 Preview 4, the new feature allows developers to run a stand-alone C# (.cs) file directly with the command dotnet run app.cs. Developers no longer need to create a project file or scaffold an entire application to try out a code snippet, run a quick script, or experiment with an idea, wrote Damian Edwards, principal architect at Microsoft, in a May 28 blog post announcing the feature. Previously, running C# code using the dotnet CLI required a project structure that included a .csproj file. Developers can get started with the feature by downloading .NET 10 Preview 4.

File-based apps lower the entry barrier to trying out C# and make the language a more attractive choice for learning, prototyping, and automation scenarios, Edwards said. Developers get a quick start: no project file is required, there is “first-class” CLI integration, and the capability scales to real applications. There is no separate dialect or runtime; when a script grows up, it can evolve into a full-fledged project using the same language, syntax, and tools. With .NET 10 Preview 4, file-based apps also support a set of file-level directives to declare packages, SDKs, and properties (which are stored in project files for project apps) without leaving a .cs file.

With dotnet run app.cs, Microsoft believes it is making C# more approachable while preserving the power and depth of the .NET ecosystem. Upcoming .NET previews will aim to improve the experience of working with file-based apps in Visual Studio Code, with enhanced IntelliSense for new file-based directives, improved performance, and debugging support, Edwards said. For the command line, Microsoft is looking into support for file-based apps that span multiple files, as well as ways to make running file-based apps faster. Microsoft asks developers to try out the capability and send feedback on GitHub.
