Overview
Products
R&D
SaaS Solution
Agent POC
Conclusion
Intelligent Finance Automation

Improving financial management, regulatory compliance and overall performance for the insurance sector

Overview

Company

Background

Founded in 2010, Phinsys has built a platform of intelligent software tools that streamline and automate the finance function of insurers and improve their financial accounting, regulatory reporting and analytical processes. We work with a wide range of insurance organisations large and small across the UK, Europe, US, Bermuda and Lloyd’s insurance markets. Our flexible and scalable software is suited to any insurance business from start-ups to those undergoing change through rapid growth, strategic merger, acquisition or the management of run-off portfolios.

Overview

Project Sponsor

Richard Tyler - Chief Executive Officer

Richard Tyler is the CEO and Co-Founder of Phinsys Limited. He oversees the technical function and is responsible for the overall technical and functional direction of all Phinsys projects. Richard has held leadership positions across the insurance and financial sectors over his 29 years in the industry, experience he leverages as CEO when ideating and executing technological solutions. A background in finance and accounting within the insurance industry, allied to his technical knowledge, made him uniquely qualified to create Phinsys and to build solutions that meet the needs of finance and IT departments across the industry.

Overview

R&D Leadership

Fiaz Ali - Data Architect

With 25 years of experience in the insurance and IT industries, Fiaz has built a diverse career spanning software development, technical services, project management, and business intelligence. Drawing on strong technical expertise and an acute understanding of emerging technologies, he leads our data-driven initiatives for the R&D team whilst supporting design and architecture. Prior to Phinsys, Fiaz led several successful large-scale data warehousing initiatives for industry-leading insurance firms at Moore Stephens Consulting, encompassing the development of ETL control architecture and the implementation of technical design patterns.

Matthew Merrifield - Technical Architect

As a seasoned veteran with a track record of enterprise software delivery, Matt has accumulated 16 years of experience architecting and building innovative insurance products. In his role as Technical Architect, he keeps abreast of the latest technology trends to ensure the technical stack is aligned with our project goals. His blended background of full-stack software development, specialising in C# and Angular, and enterprise architecture is integral to the execution of the project pipeline. Matt also brings prior experience from Atrium Underwriters Ltd, a Lloyd's and US-focused insurer, where he led the delivery of numerous software projects as a developer.

Overview

Clients

Overview

Testimonials

“Implementing the Phinsys platform is an exciting and important step forwards for Markel, at a time of significant growth. This platform will optimise our financial and reserving processes whilst reducing costs and enhancing scalability. As we set our sights on becoming the leading specialty insurer, we’re investing in our processes and prioritising productivity so that, ultimately, we can pass the benefits on to our clients.”

COO, Markel International

“As a start-up company, our engagement with Phinsys was not about improving or consolidating existing systems and processes, but about building an efficient finance calculation engine in one platform, allowing us better access to more transparent data than might otherwise have been the case in traditional systems. While we look forward to our continued partnership with Phinsys, we are pleased to have successfully concluded this first, significant stage of that development.”

CFO, Conduit Re

Overview

Awards and Partnerships

Awards

  • Insurance ERM – Digital Transformation: Insurtech Solution 2021
  • 3 x European Insurance Technology Awards 2022
    • Best Data Solutions Provider
    • Best Software Provider – Digital Back End
    • Best Automations Solution Provider
  • Insurance Times Tech & Innovation Awards – Technology Partner of the Year (Silver) 2023
  • Insurance ERM – End-user Computing Risk Management Solution of the Year 2024

Partnerships

  • Lloyd's Lab: Cohort 3 Member 2019
  • PwC: Scale-Up Cohort 2019 (Joint Business Relationship - IFRS17 implementation partner)
  • Oxbow Partners: Insurtech Impact 25 member 2020
Product Suite

R&D

Industry Baseline

Organisations relied on a mix of discrete tools for ETL, data processing, storage, and orchestration, often from different vendors and internal development initiatives. This created complexity, high integration costs, and overhead. Seamlessly combining these technologies into a cohesive, scalable system was a significant challenge. There were also gaps in industry knowledge and capability to support the scalability, security, infrastructure agility, data sovereignty and regulatory compliance requirements specific to insurance as a sector.

R&D

Activities Deemed R&D

Proof of Concept (PoC) Development: Exploratory engineering work to evaluate technical feasibility under uncertainty, including testing Azure routing capabilities, hybrid agent orchestration models, and CI/CD performance tuning.

Custom Software Components and Libraries: Building C# libraries for dynamic routing, virtual machine orchestration, secure on-prem data access, and infrastructure state management, where no commercial equivalent existed.

Protocol and Authentication Mechanism Evaluation: Empirical testing of transport and communication protocols (e.g. SignalR) and authentication layers (e.g. Azure AD), including benchmarking and failure mode analysis.

Scalability and Synchronisation Testing: Experimental validation of performance and scalability under load, such as routing multiple service versions globally, coordinating agents across hybrid environments, and reconciling large regulatory datasets.

R&D

Activities Excluded from R&D

Standard Configuration Tasks: Applying known best practices for CI/CD, setting up pipelines, or configuring infrastructure using established guides.

UI Development or Cosmetic Enhancements: Changes to frontend styling or layout, or building standard user-facing forms without novel logic.

Routine Maintenance and Documentation: Creating end-user guides, updating release notes, or general system administration not tied to technical uncertainty resolution.

R&D

Requirements for Innovation

Innovation was required to:

Support real-time, version-aware service routing; securely process data on-premise whilst orchestrating across multiple agents; and optimise deployments across a vast infrastructure landscape.

Why Existing Tools Could Not Be Adapted:

  • Security: Off-the-shelf agents often transmitted data through third-party clouds, failing to meet strict data sovereignty requirements.
  • Closed Architectures and Limited Extensibility: Many tools had closed APIs or were limited by their design, preventing extension or deep customisation.
  • Inflexible Toolchains: Static configuration models in both infrastructure and routing logic couldn’t accommodate dynamic, version-aware workloads.
  • Authentication: No unified authentication strategy across cloud and on-prem deployments.
  • Lack of Secure Agent Orchestration: Agent-to-agent communication was not supported natively, and orchestration frameworks were cloud-tethered.
R&D

Technologies Commonly Used

Azure Front Door, Application Gateway, and Service Fabric: Lacked native support for dynamic routing between multiple versions of services on shared infrastructure.

Terraform: A leading tool for infrastructure-as-code but based on a domain-specific language (HCL) that offered limited programmability.

Pulumi: An open-source infrastructure-as-code tool that lets teams provision and manage cloud resources using general-purpose programming languages.

SQL Platforms (Azure SQL, AWS RDS, On-Prem SQL Server): Functionality varied significantly across platforms, especially regarding newer features like STRING_SPLIT, STRING_AGG, or partitioning.

Azure DevOps Pipelines: Effective for pipeline orchestration, but hosted agents had limited caching and customisation options.

Agent Technologies (for On-Prem Data Access): No commercial agent frameworks at the time supported secure, scalable remote execution without cloud intermediaries.

R&D

Recurring Technical Challenges

SaaS Solution

Technology Baseline

Version Management: There was no existing mechanism within a SaaS architecture to support multiple clients running different versions of the product simultaneously, which is crucial for controlled releases and client-specific environments (DEV, UAT, PROD).

Technological Integration: The technologies being considered (Azure Cosmos DB, Service Fabric, Front Door, and App Services) were not designed to work together in the required manner, i.e. allowing a single hosted entry point to combine global replication, multi-version support, and high-availability clustering.

Data Integrity: Clients needed the ability to control when they absorbed new product changes to ensure the integrity of sensitive financial data. This level of control was not supported by traditional SaaS models.

Routing and High Availability: The traditional SaaS application hosting model did not support the necessary level of routing and high availability clustering required to manage multiple instances and versions of the application: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-hosting-model

SaaS Solution

Scientific or Technological Advances Sought

High Availability: The application had to be highly available to minimise downtime, and replicable globally to allow for low-latency access. In addition, the solution also had to support the deployment and maintenance of different versions of the product, to which clients could individually be redirected.

Single Hosted URL: Provide a single hosted URL with the application itself globally deployed and serving multiple entities at the same time.

Multiple Instances: Support multiple instances of the application at different versions, with clients directed seamlessly to their respective version, whilst holding the metadata in an unstructured document database so that a single store works for all clients.

Integration: Proof of concept development to ensure a suitable integration of technologies such as Azure Cosmos DB, Service Fabric, Front Door and App Services.

SaaS Solution

Scientific or Technological Uncertainties

Version Management: The traditional hosting model did not allow for the required level of version management, and it was not readily deducible how routing could be undertaken in this technological context.

Integration Challenges: It was not readily deducible which technologies could be integrated together and how these could deliver the full spectrum of capabilities required including versioning, routing, global replication, high availability etc.

Centralised Routing: The team were uncertain whether each node in the cluster could support multiple versions of the services whilst managing routing through central environment configuration.

Deployment Uncertainty: The ability to perform this deployment and provide dynamic routing rules that were driven by the target client and environment using this combination of technologies was not something that was readily deducible without systematically proving out the approach.

SaaS Solution

How Uncertainties Were Overcome

Scalability Tests: Baseline tests established the basic scalability of Front Door and Service Fabric in a multi-tenanted solution.

Multiple Version Support: The second stage was to ensure that the cluster nodes could support multiple versions of the application service. This was challenging as the services rely on port numbers, which would be different each time the application or an updated version was deployed. The internal path to the service would require a version number to be embedded in the proxy URL of each service's local host name. This meant the local host name for the service could stay generic, with the reverse proxy handling the routing to the Service Fabric node URLs, as sketched below.
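The following is a minimal sketch of that idea, assuming hypothetical application and service names ("FinanceApp", "LedgerService"): the version is embedded in the application path of the Service Fabric reverse proxy URL, so callers address a stable local host and never depend on the dynamically assigned service port.

```csharp
// Minimal sketch (application/service names are assumed): addressing a
// versioned service through the Service Fabric reverse proxy so that the
// caller never depends on the dynamically assigned service port.
using System;

public static class ProxyUrlBuilder
{
    // 19081 is the default Service Fabric reverse proxy port.
    private const string ReverseProxyHost = "http://localhost:19081";

    public static Uri ForService(string application, string service, string version)
    {
        // e.g. http://localhost:19081/FinanceApp_v2.3.1/LedgerService/
        // The reverse proxy resolves this path to the node and port actually
        // hosting that version of the service.
        return new Uri($"{ReverseProxyHost}/{application}_v{version}/{service}/");
    }
}

// var url = ProxyUrlBuilder.ForService("FinanceApp", "LedgerService", "2.3.1");
```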

Custom Library Development: The external host names coming in to access the application would contain the client and environment details, allowing for the correct service version URLs to be deduced. The redirection/routing would be managed through a set of path rule configurations in the Application Gateway. A custom C# library was created to automatically produce the gateway routing configuration based on changes to client environment setup and version updates as this was not natively supported.
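As an illustration of the shape of that library, the sketch below derives gateway routing entries from client environment records; the type names, external host convention and backend pool naming are all assumptions, and applying the configuration to Azure is not shown.

```csharp
// Illustrative sketch only: deriving Application Gateway routing entries from
// client/environment/version records. All names below are hypothetical.
using System.Collections.Generic;
using System.Linq;

public record ClientEnvironment(string Client, string Environment, string Version);

public record GatewayRule(string HostName, string BackendPool);

public static class GatewayConfigGenerator
{
    public static IReadOnlyList<GatewayRule> Generate(IEnumerable<ClientEnvironment> setups) =>
        setups.Select(s => new GatewayRule(
            // Assumed external host convention: {client}-{environment}.phinsys.example
            HostName: $"{s.Client}-{s.Environment}.phinsys.example".ToLowerInvariant(),
            // Traffic is routed to the backend pool hosting that client's version.
            BackendPool: $"pool-v{s.Version}"))
        .ToList();
}

// Regenerated whenever a client environment is created or upgraded, e.g.:
// var rules = GatewayConfigGenerator.Generate(new[]
// {
//     new ClientEnvironment("Acme", "UAT",  "2.3.1"),
//     new ClientEnvironment("Acme", "PROD", "2.2.0"),
// });
```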

SaaS Solution

How Uncertainties Were Overcome

Multi-site Replication: Work was then done to ensure that the entire environment was geo-replicated and synced across multiple regions to route local clients to their closest region to reduce latency. Deploying and setting up the multiple regions entailed different challenges in relation to site-to-site connectivity and subnet peering so that the master region could quickly and easily replicate out any changes. The custom C# library had to be adapted to perform the replication of the Application Gateway configuration globally since there was no provision for this natively. This component would also be able to prevent routing to ‘stale’ versions of the application or services.
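A standalone sketch of the stale-version safeguard, under assumed names: rules pointing at versions no client environment still references are pruned before the configuration is pushed out to each secondary region.

```csharp
// Standalone sketch (all names assumed): prune rules that target versions no
// client still uses, then replicate the remainder to each secondary region.
using System;
using System.Collections.Generic;
using System.Linq;

public record RoutingRule(string HostName, string Version);

public static class GeoReplicator
{
    public static void Replicate(
        IReadOnlyList<RoutingRule> masterRules,
        ISet<string> activeVersions,
        IEnumerable<string> secondaryRegions)
    {
        // Drop rules that would route traffic to a 'stale' deployment.
        var live = masterRules.Where(r => activeVersions.Contains(r.Version)).ToList();

        foreach (var region in secondaryRegions)
        {
            // The real component applied this via the Azure management APIs;
            // here the push is stubbed out.
            Console.WriteLine($"{region}: applying {live.Count} routing rules");
        }
    }
}
```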

Accurate Versioning: Other architectural challenges followed around contention and locking of configuration within the application, and these were overcome using timestamps and row versions to ensure that data integrity prevailed. The records would also need to be stamped with the version of the application that created it, so as to prevent cross-version data corruption.
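A minimal sketch of that concurrency pattern, assuming a SQL Server table named Configuration with a rowversion column: the update succeeds only if the row is unchanged since it was read, and every write is stamped with the application version that produced it.

```csharp
// Minimal sketch (table and column names assumed): optimistic concurrency via
// a SQL Server rowversion column, plus stamping each record with the writing
// application version to guard against cross-version data corruption.
using Microsoft.Data.SqlClient;

public static class ConfigurationWriter
{
    public static bool TryUpdate(SqlConnection conn, int id, string value,
                                 byte[] expectedRowVersion, string appVersion)
    {
        const string sql = @"
            UPDATE Configuration
               SET Value = @value,
                   WrittenByVersion = @appVersion,
                   ModifiedUtc = SYSUTCDATETIME()
             WHERE Id = @id
               AND RowVersion = @rowVersion;"; // no match => concurrent change

        using var cmd = new SqlCommand(sql, conn);
        cmd.Parameters.AddWithValue("@value", value);
        cmd.Parameters.AddWithValue("@appVersion", appVersion);
        cmd.Parameters.AddWithValue("@id", id);
        cmd.Parameters.AddWithValue("@rowVersion", expectedRowVersion);

        // Zero rows affected means the row version moved on: re-read and retry.
        return cmd.ExecuteNonQuery() == 1;
    }
}
```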

Performance Testing: The combination of these different technologies and the unique configuration in this customised stack was something that could only have been overcome on completion of this R&D project. Extensive testing was performed to ensure there was low latency when changes were being replicated from one region to another.

Agent POC

Technology Baseline

Target Model: Phinsys sought to develop a solution to allow the processing of data on remote sites and secure access to data directly without transferring it via the cloud.

Current Solution Limitations:

  • Data Gateways and Connectors, such as those offered by Microsoft Azure, AWS, and Google Cloud, are commonly used to facilitate data transfer between on-premise environments and cloud services.

  • These solutions only support data transfer between sites and do not allow for the execution of processes remotely or direct access to data from a remote site without passing through a cloud-hosted application.

  • This results in unnecessary data movement, increased latency, and potential security risks, especially when handling sensitive financial information.

  • Tools like Microsoft’s On-Premises Data Gateway, AWS Direct Connect, and Google Cloud’s Dedicated Interconnect are designed to ensure reliable connectivity between on-premises systems and the cloud but are built around the assumption that data will flow through the cloud infrastructure. This approach introduces potential vulnerabilities and compliance risks and means data cannot be kept strictly on-premise. See https://learn.microsoft.com/en-us/data-integration/gateway/service-gateway-onprem; https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect.html; https://cloud.google.com/network-connectivity/docs/interconnect/concepts/dedicated-overview

Agent POC

Scientific or Technological Advances Sought

Hybrid Solution: A new hybrid approach in which the configuration and orchestration of the various financial processes could be hosted in a SaaS-enabled architecture whilst sensitive financial data remained within on-premise infrastructure, together with an efficient and cost-effective means of processing that information without transferring it via the cloud.

Custom Agent Development: This solution involved developing a custom agent that would enable remote process execution and direct data access without involving the cloud.

Agent POC

Limitations of Available Technology

Agent POC

Scientific or Technological Uncertainties

Remote Commands: One of the primary uncertainties was whether it was technically feasible to build a solution that could enact remote commands and serve data to the client browser without going via the cloud-hosted web application.

Security Integrity: It was not readily deducible whether the proposed solution could match the robust security measures provided by cloud platforms.

Performance: It was also uncertain whether the solution could maintain high performance and scalability while operating entirely within client infrastructure.

Scalability: Phinsys were unsure as to how the solution could be scaled by installing multiple agents that could share the load.

Run-time Flexibility: Ensuring that the agent would integrate seamlessly with the client’s infrastructure was another source of uncertainty. For example, clients may want to run the Agent in a cloud environment and not just as a Windows service, so the agent had to be deployable as a container, an app service or a Windows executable.

Agent POC

How Uncertainties Were Overcome

Communication Protocol: The first iteration centred on establishing a robust communication protocol between the Phinsys Agent and the cloud-hosted web application. The aim was to test different secure communication layers that could be used to efficiently transmit metadata.

TLS Protocol: The initial tests were conducted using standard HTTP/HTTPS protocols due to their widespread use and built-in support for secure communication via TLS (Transport Layer Security). These protocols required rigorous testing to ensure that they could handle the specific needs of the Phinsys Agent, particularly in terms of performance and reliability under varying network conditions.

WebSockets: WebSockets were evaluated given the need for real-time communication and reduced latency for processing status updates. WebSockets offer full-duplex communication channels over a single TCP connection, crucial for scenarios requiring rapid updates of processing statuses and rule conditions between the cloud application and on-premise environments. WebSockets were tested for both performance and security, particularly focusing on their ability to maintain secure, persistent connections without introducing significant overhead.

gRPC: Another protocol we evaluated was gRPC, which is known for its high performance and support for various data serialization formats like Protocol Buffers. gRPC was particularly appealing due to its ability to efficiently manage remote procedure calls (RPCs) across distributed systems. gRPC was tested to assess whether it could provide a more performant and scalable solution compared to traditional HTTP/HTTPS or WebSockets, especially in scenarios involving complex data interactions between internal agent components and also between separate instances of the on-premise agents.

Conclusion: It was concluded that communication between the internal Phinsys Agent modules would benefit from gRPC, while communication with the cloud-hosted web API should use HTTPS as the standard means of secure data transfer. Where two-way communication is required for status updates and processing rule evaluation, WebSockets are used in conjunction with SignalR, which was identified as the most suitable implementation as it provides automatic fallback to other transports if WebSockets is unavailable.
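A short sketch of the two-way channel, with the hub URL and message name assumed: SignalR negotiates WebSockets first and falls back to other transports automatically, which is the failover behaviour the conclusion relies on.

```csharp
// Sketch (hub URL and message name are assumptions): a SignalR connection for
// two-way status updates. SignalR negotiates WebSockets where available and
// falls back to other transports automatically.
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

public static class StatusChannel
{
    public static async Task<HubConnection> ConnectAsync(string hubUrl)
    {
        var connection = new HubConnectionBuilder()
            .WithUrl(hubUrl)            // e.g. "https://app.example/hubs/status"
            .WithAutomaticReconnect()   // re-establish the connection on drops
            .Build();

        // Hypothetical server-to-agent message carrying a processing status.
        connection.On<string, string>("ProcessingStatusChanged",
            (processId, status) => Console.WriteLine($"{processId}: {status}"));

        await connection.StartAsync();
        return connection;
    }
}
```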

Agent POC

How Uncertainties Were Overcome

Custom Secure Integration: Once the protocol testing was completed, the next phase of iteration involved proving the Phinsys Agent’s ability to process data entirely within the client’s infrastructure even though the web application was hosted in the cloud. The next step was to ensure that the Phinsys Agent could serve processed data directly to local web clients of the cloud-hosted web application without routing that data through the web server. This required the development of a secure, efficient mechanism by which the Phinsys Agent could interact with the web application in the browser without exposing the data to unnecessary risk.

Custom Endpoints: To provide access to clients' internal data and query it successfully, a bespoke solution had to be developed to return data from both relational databases and Microsoft Analysis Services. For the relational databases, a custom OData-compliant endpoint was created to transform the front-end query arguments into a SQL script that could be executed on the client's database. This was tested using MSSQL but implemented in a way that would provide extensibility to support other relational database providers. Access to Microsoft Analysis Services data was achieved by creating a custom C# library that acted as a proxy to the endpoints provided by MSMDPUMP.DLL. This allowed the UI to pass native OLAP queries to the Phinsys Agent, which would relay those requests to the cube server to obtain the results and pass them back to the web application.
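To make the relational half concrete, here is a heavily simplified sketch of the translation idea, supporting only $top and a single equality $filter; the production endpoint handled a much fuller OData surface, and every name below is illustrative.

```csharp
// Heavily simplified sketch: translating a couple of OData query options into
// a parameterised SQL command. Illustrative only; the real endpoint supported
// far more of the OData grammar.
using System.Collections.Generic;
using Microsoft.Data.SqlClient;

public static class ODataToSql
{
    // 'table' must come from a trusted whitelist, never from user input.
    public static SqlCommand Build(string table, IDictionary<string, string> options)
    {
        var cmd = new SqlCommand();
        var sql = "SELECT ";

        if (options.TryGetValue("$top", out var top) && int.TryParse(top, out var n))
            sql += $"TOP ({n}) ";

        sql += $"* FROM [{table}]";

        // Supports only "Field eq Value" for illustration,
        // e.g. "$filter=PolicyYear eq 2023" => "WHERE [PolicyYear] = @p0".
        // A real implementation would validate the field name against the model.
        if (options.TryGetValue("$filter", out var filter))
        {
            var parts = filter.Split(' ');
            if (parts.Length == 3 && parts[1] == "eq")
            {
                sql += $" WHERE [{parts[0]}] = @p0";
                cmd.Parameters.AddWithValue("@p0", parts[2]);
            }
        }

        cmd.CommandText = sql;
        return cmd;
    }
}
```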

Multi-Channel Integration: In the final stages of the PoC, the capability of multiple Phinsys Agents to communicate across different environments, such as between on-premise systems and cloud infrastructures like Azure, was proven. The PoC demonstrated that multiple Phinsys Agents could communicate and transfer data securely across different environments. This was achieved by establishing secure communication channels using the previously tested protocols, allowing for encrypted, reliable data transfer between different infrastructure setups. The key to achieving this was the development of an orchestration rule engine that could analyse the resources available to each agent and identify the required communication channel, so that the correct Agent(s) were targeted for executing the planned process and direct communication between them could be established.
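The rule engine's core selection step might look like the following sketch, with all types, resource names and channels assumed: agents advertise their resources and preferred channel, and the engine targets those whose resources satisfy the planned process.

```csharp
// Conceptual sketch (types, resource names and channels are assumptions):
// select the agent(s) whose advertised resources satisfy a planned process.
using System.Collections.Generic;
using System.Linq;

public record AgentInfo(string Id, ISet<string> Resources, string Channel);

public static class AgentSelector
{
    public static IEnumerable<AgentInfo> SelectFor(
        IEnumerable<AgentInfo> agents, ISet<string> required) =>
        agents.Where(a => required.IsSubsetOf(a.Resources));
}

// var agents = new[]
// {
//     new AgentInfo("onprem-01", new HashSet<string> { "mssql", "ssas" }, "grpc"),
//     new AgentInfo("azure-01",  new HashSet<string> { "mssql" },          "https"),
// };
// // A process needing cube access is routed to "onprem-01" over gRPC:
// var targets = AgentSelector.SelectFor(agents, new HashSet<string> { "ssas" });
```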

Agent POC

Conclusion - SaaS Architecture

Pure SaaS

Conclusion - SaaS Architecture

Hybrid - "On Premise" Physical / VMs

Conclusion - SaaS Architecture

Hybrid - Azure Serverless

Conclusion - SaaS Architecture

Hybrid - Azure Serverless With Data On-Premise

Conclusion - SaaS Architecture

Hybrid - SaaS Solution for All Cloud Providers

Q&A