James Johnston, VP EMEA at Azul, argues that only a tight FinOps‑engineering partnership can rein in over‑provisioned Java estates, cut cloud waste and boost performance all at once.
FinOps adoption is maturing, and it should be no surprise that it is now widely used in the enterprise. According to Flexera, 33% of organisations say their FinOps adoption is mature, while a similar number say it is growing. Historically, FinOps has focused on visibility, tagging and monitoring, so that organisations can see exactly what they are spending and allocate the right charges back to individual departments. This is the foundation needed to budget and forecast cloud usage properly. There has also been a focus on anomaly detection, to ensure organisations spot issues early, avoid costly overspend, optimise workloads and reduce cloud waste.
Now, however, FinOps teams are doubling down on reducing cloud waste and optimising workloads, as highlighted by the State of FinOps 2025 report, which identified both as top priorities.
This reflects a continued move towards value through optimisation, because performance is becoming more critical at a time of growing demand for compute resources to support innovation. Costs are spiralling as technologies such as artificial intelligence drive competition for hardware resources. Organisations that can streamline cloud resources right down to the CPU, network and data storage level gain a significant advantage.
Consequently, planning and estimating the costs of new technologies and workloads is just the beginning. FinOps practitioners must educate stakeholders on the benefits of optimisation so they can architect efficiently for the cloud. Once operations are underway, engaging engineers in workload optimisation and performing rate optimisation become key activities.
Success requires FinOps and engineering teams to collaborate closely, which also demands an adjustment to the DevOps mindset. This collaboration is key to optimising cloud usage without sacrificing performance.
Optimisation through collaboration
Collaboration matters because the main consumers of cloud resources are engineering and DevOps teams. To date, they have not been tasked with understanding the cost implications of spinning up new cloud instances, and the danger is that they may unknowingly play a key role in driving up indirect cloud spend.
Why? DevOps is a discipline whose main purpose is to enable the fast development and delivery of new features and functions.
This is a disposable-infrastructure mindset, with teams typically measured primarily on performance SLAs: the application must reach production in a timely manner and be highly available. If that is the priority, DevOps is less inclined to pay attention to how much cloud usage costs. A FinOps and engineering partnership is therefore central to overcoming the issue.
This is especially true for data analytics platforms and AI or ML environments that require large data sets for modelling. Large Java estates need significant compute, which can increase recurring cloud charge commitments and upend budget forecasts.
So how do you avoid this becoming a problem?
Applying FinOps policies to Java application engineering
What FinOps needs to work on with the engineering team is a set of transparent rules, starting with an agreed limit on how much wasted capacity the teams will tolerate. This enables the organisation to enforce utilisation policies without waiting for engineers to self-enforce them, while still giving developers the cloud capacity they need to build new functionality in a timely manner.
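In practice, such a rule can be as simple as an agreed minimum average utilisation, checked automatically against monitoring data. The sketch below is a minimal, hypothetical illustration: the 40% threshold, instance names and utilisation figures are all assumptions, not a real policy or a real monitoring API.

```java
import java.util.List;
import java.util.Map;

// Minimal sketch of an agreed utilisation policy: flag instances whose
// average CPU utilisation falls below the tolerated waste threshold.
// Threshold and figures are hypothetical, for illustration only.
public class UtilisationPolicy {
    // Agreed with engineering: anything under 40% average CPU counts as waste.
    static final double MIN_UTILISATION = 0.40;

    static List<String> flagUnderUtilised(Map<String, Double> avgCpuByInstance) {
        return avgCpuByInstance.entrySet().stream()
                .filter(e -> e.getValue() < MIN_UTILISATION)
                .map(Map.Entry::getKey)
                .sorted()
                .toList();
    }

    public static void main(String[] args) {
        // Hypothetical monthly averages, as exported from a monitoring tool.
        Map<String, Double> usage = Map.of(
                "java-batch-01", 0.72,
                "java-api-02", 0.18,
                "java-analytics-03", 0.35);
        System.out.println(flagUnderUtilised(usage));
        // prints [java-analytics-03, java-api-02]
    }
}
```

A check like this can run in a scheduled job and open a ticket for each flagged instance, so enforcement does not depend on engineers remembering the rule.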
Applying this approach to Java brings some specific considerations. Java has long been a workhorse for data processing and is incredibly robust, but JVM warm-up time is an issue when transactions must be handled at speed, especially during a big spike in traffic. Users worry that latency-sensitive Java applications will not be able to provision additional server resources in time to meet demand without affecting the customer experience.
To get around this, many organisations over-provision cloud resources as a backup to guarantee performance, scalability and flexibility. This, though, creates utilisation inefficiencies, so large Java estates are low-hanging fruit for FinOps teams.
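A back-of-the-envelope calculation makes the scale of that waste concrete. All figures below are hypothetical illustrations, not vendor pricing: an estate that needs ten instances at steady state but keeps sixteen warm for spikes is paying for 60% headroom around the clock.

```java
// Back-of-the-envelope cost of warm-up-driven over-provisioning.
// All figures are hypothetical illustrations, not real cloud pricing.
public class OverProvisioningCost {
    static double monthlyCost(int instances, double hourlyRate) {
        return instances * hourlyRate * 24 * 30; // ~720 billable hours/month
    }

    public static void main(String[] args) {
        double rate = 0.50;       // hypothetical $/hour per instance
        int steadyStateNeed = 10; // instances needed for normal load
        int provisioned = 16;     // extra headroom kept warm for traffic spikes

        double actual = monthlyCost(provisioned, rate);
        double rightSized = monthlyCost(steadyStateNeed, rate);
        System.out.printf("Monthly waste: $%.0f (%.0f%% over-provisioned)%n",
                actual - rightSized,
                100.0 * (provisioned - steadyStateNeed) / steadyStateNeed);
    }
}
```

Even at these modest assumed rates, the idle headroom costs thousands of dollars a month, which is exactly the kind of line item a FinOps review surfaces.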
Encouraging a culture of collaboration
If FinOps is deployed effectively in Java environments, it enables organisations to innovate more aggressively and encourages a different approach to deploying cloud resources.
The clear lesson is to create a culture of collaboration between FinOps and engineering. As in Formula One, where the teams that have just started the new season chase marginal gains, collaboration is crucial. A greater emphasis on teamwork will see organisations buy into the value of FinOps and enable them to optimise their Java estates to reduce cloud waste and improve performance.