My level of contribution differs from product to product, but in all cases I was responsible for product positioning, the product and business roadmaps, product and business strategy, product management and development, financing, partnerships, team building, talent hiring, onboarding, and training programs.
For Eveince products I wanted to focus more on the non-technical side, but I did write models, especially in portfolio management: for example, an efficient-frontier calculation for a crypto portfolio. It is not part of our deployed models, but I used the analysis to get a sense of the residual risk we were still taking on despite our quant models.
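Purely as an illustration, and not Eveince's deployed code, here is a minimal sketch of a Markowitz-style efficient-frontier calculation for a small crypto portfolio; the return series, asset count, and target-return sweep are all hypothetical.

```python
import numpy as np

# Hypothetical daily returns for three crypto assets (rows = days, cols = assets).
rng = np.random.default_rng(0)
returns = rng.normal(loc=[0.001, 0.0008, 0.0005], scale=[0.05, 0.04, 0.02], size=(365, 3))

mu = returns.mean(axis=0)             # expected daily return per asset
cov = np.cov(returns, rowvar=False)   # covariance matrix of asset returns

def min_variance_weights(mu, cov, target_return):
    """Closed-form mean-variance weights for a target return, with only the
    full-investment and target-return constraints (no long-only constraint,
    so weights can be negative)."""
    n = len(mu)
    ones = np.ones(n)
    inv = np.linalg.inv(cov)
    # Standard two-constraint Lagrangian solution.
    a = ones @ inv @ ones
    b = ones @ inv @ mu
    c = mu @ inv @ mu
    d = a * c - b * b
    lam = (c - b * target_return) / d
    gam = (a * target_return - b) / d
    return inv @ (lam * ones + gam * mu)

# Trace the frontier over a range of target returns.
for target in np.linspace(mu.min(), mu.max(), 5):
    w = min_variance_weights(mu, cov, target)
    vol = np.sqrt(w @ cov @ w)
    print(f"target={target:.4%}  vol={vol:.4%}  weights={np.round(w, 2)}")
```

Sweeping the target return and recording the resulting volatility traces out the frontier, which is what I used to sanity-check how much risk the portfolio construction was implicitly accepting.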
At Eveince we are not focused on forecasting or market price prediction, but rather on risk modeling and robust financial returns. For our business case, however, I needed to model different market scenarios against different Eveince performance profiles to map out our strategy. That required a model separate from the one actually trading, one I could feed different scenarios, e.g. a crypto collapse triggered by tightened regulation.
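A minimal sketch of that kind of scenario analysis is below; the scenario names, shock sizes, capture ratios, and AUM figure are purely hypothetical and separate from any deployed Eveince model.

```python
# Hypothetical stress scenarios: each maps a market shock to an assumed
# strategy capture ratio (how much of the shock the strategy absorbs).
scenarios = {
    "base case":            {"market_move": 0.10,  "capture": 0.6},
    "regulatory crackdown": {"market_move": -0.45, "capture": 0.3},
    "liquidity crunch":     {"market_move": -0.25, "capture": 0.4},
}

aum = 10_000_000  # assumed assets under management in USD

for name, s in scenarios.items():
    # Strategy P&L under the scenario: shock size scaled by the capture ratio.
    pnl = aum * s["market_move"] * s["capture"]
    print(f"{name:22s} market={s['market_move']:+.0%}  strategy P&L={pnl:+,.0f} USD")
```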
-
General AI models for risk management:
These AI models are the core value of Eveince. Identifying the different types of risk, product positioning, evaluation metrics, benchmarks, and trade frequency was crucial. It was also essential to manage the R&D resources for each topic so that we didn't overspend on one problem or underspend on another, which required constant monitoring, evaluation, and stringent testing. The result is a set of AI models that have been live in the market for over a year, beating all of their objectives as well as other respected hedge funds.
Philosophy of Asset Management
We've used a wide range of AI and mathematical models. We invested heavily in statistical models and empirical returns for position risk management, which requires extensive data normalization and dimensionality-reduction models such as PCA. We used HMMs for behavior modeling and transformed the data toward a Gaussian distribution to build robust and predictable models. This part of the pipeline required a high level of explainability, so we did not use deep learning architectures here. For portfolio risk and value-at-risk modeling we used Random Forests and bet-sizing methods from poker theory. For order risk, we used deep reinforcement learning: in the initial steps we used OpenAI Gym, but eventually we built our own environment to train agents.
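Our production environment is not public; the following is only a minimal sketch of what a custom Gym-style order-execution environment can look like, with a made-up state (remaining inventory and elapsed time), a toy impact-penalized reward, and hypothetical class and parameter names.

```python
import gym
import numpy as np
from gym import spaces


class ToyExecutionEnv(gym.Env):
    """Hypothetical order-execution environment: the agent must sell a fixed
    inventory over a fixed horizon; the action is how much to sell this step."""

    def __init__(self, horizon=20, inventory=1.0):
        super().__init__()
        self.horizon = horizon
        self.start_inventory = inventory
        # Observation: [remaining inventory, fraction of time elapsed].
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(2,), dtype=np.float32)
        # Action: sell 0%, 5%, or 10% of the starting inventory this step.
        self.action_space = spaces.Discrete(3)
        self._fractions = [0.0, 0.05, 0.10]

    def reset(self):
        self.t = 0
        self.inventory = self.start_inventory
        return self._obs()

    def step(self, action):
        sold = min(self.inventory, self._fractions[action] * self.start_inventory)
        self.inventory -= sold
        # Toy reward: proceeds minus a quadratic market-impact penalty.
        reward = sold - 5.0 * sold ** 2
        self.t += 1
        done = self.t >= self.horizon or self.inventory <= 0
        if done and self.inventory > 0:
            reward -= self.inventory  # penalty for unsold inventory at the horizon
        return self._obs(), reward, done, {}

    def _obs(self):
        return np.array([self.inventory, self.t / self.horizon], dtype=np.float32)
```

An environment exposing this interface can then be trained against any off-the-shelf RL library that speaks the Gym API.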
-
Cloud infrastructure to run AI models:
To run AI models, you need reliable cloud infrastructure. We designed a new process context model that uses an Event-Sourcing architecture and Idempotent Request management to implement CQRS. This allowed us to implement all required business processes reliably and with deep auditability. All core business processes in the Eveince platform implement this design, resulting in 99.999% service availability for over a year, while keeping transaction and account costs so low that it became a competitive advantage over other AI funds.
Resilient self-healing business processes inside an automated hedge fund
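The Eveince implementation is proprietary; below is only a minimal, self-contained sketch of the two ideas named above, an append-only event log and idempotent command handling keyed by a request ID, with hypothetical event and command names.

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class Event:
    name: str      # e.g. "Deposited" (hypothetical event type)
    payload: dict


@dataclass
class Ledger:
    """Append-only event store with idempotent command handling."""
    events: list = field(default_factory=list)
    handled_requests: set = field(default_factory=set)

    def handle(self, request_id: str, command: str, payload: dict) -> bool:
        # Idempotency: a request that was already processed is ignored,
        # so retries (e.g. after a network timeout) cannot double-apply.
        if request_id in self.handled_requests:
            return False
        self.handled_requests.add(request_id)
        self.events.append(Event(name=command, payload=payload))
        return True

    def project_balance(self) -> float:
        """Read model rebuilt from the event log (the CQRS query side)."""
        balance = 0.0
        for e in self.events:
            if e.name == "Deposited":
                balance += e.payload["amount"]
            elif e.name == "Withdrawn":
                balance -= e.payload["amount"]
        return balance


ledger = Ledger()
req = str(uuid.uuid4())
ledger.handle(req, "Deposited", {"amount": 100.0})
ledger.handle(req, "Deposited", {"amount": 100.0})  # retried request, ignored
print(ledger.project_balance())  # 100.0
```

Because the event log is the source of truth, any read model can be rebuilt from it, which is also what makes the processes auditable and self-healing after a failure.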
We've also deployed and integrated Kubernetes to optimize capacity planning and to take advantage of cloud data snapshots in case of hardware failure.
-
Portfolio comparison and simulation tools:
We built these tools to onboard our clients and give them a sense of what adding our products to their portfolio would look like. Please sign up here and head to the simulation section.
-
ActorFS, A distributed object-file system:
ActorFS was a competitor to the Hadoop Distributed File System (HDFS) and systems like Ceph, capable of in-memory big data analytics at scale. I designed the Hierarchical Cache System and the Composable Processes that allowed us to scale the system on a distributed architecture. I also designed and developed a distributed key-value store for keeping an index of the data across the entire system. This key-value store was an extension of a distributed B+Tree and was 2.3x faster than Redis, the industry benchmark for a high-speed in-memory key-value store. We used Scala and the Akka actor model, hence the name ActorFS.
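ActorFS itself was written in Scala with Akka; purely to illustrate the idea of an ordered index that supports point lookups and range scans (what the B+Tree-based store provided), here is a tiny single-node Python sketch. It is not ActorFS code and ignores distribution, caching, and persistence entirely.

```python
import bisect


class OrderedKV:
    """Toy ordered key-value store: a sorted key list plus a dict,
    standing in for the ordered index a B+Tree gives you."""

    def __init__(self):
        self._keys = []   # kept sorted so range scans are cheap
        self._data = {}

    def put(self, key, value):
        if key not in self._data:
            bisect.insort(self._keys, key)
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def range(self, lo, hi):
        """Yield (key, value) pairs with lo <= key < hi, in key order."""
        start = bisect.bisect_left(self._keys, lo)
        end = bisect.bisect_left(self._keys, hi)
        for k in self._keys[start:end]:
            yield k, self._data[k]


kv = OrderedKV()
for path, size in [("/data/a.parquet", 10), ("/data/b.parquet", 42), ("/logs/x", 7)]:
    kv.put(path, size)
print(list(kv.range("/data/", "/data0")))  # all keys under the /data/ prefix
```

Keeping the index ordered is what makes prefix and range queries over file paths efficient, which a plain hash-based store cannot do.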
-
LeggoApp, Scalable data management for enterprise:
LeggoApp started as a concept design that exploited Docker as a platform-agnostic software-delivery mechanism to connect enterprise data to scientific research. As LeggoApp grew within each business domain, we decided to package the common denominators of all the solutions and provide them to other companies. LeggoApp comprised ActorFS as the storage layer, ML models as the analytics layer in the middle, and a web app on top for user interaction. It supported all R libraries available on CRAN and could host JavaScript web components. The pluggable architecture of LeggoApp enabled businesses to quickly run many experiments, where each experiment was ready for production after passing its acceptance criteria.
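The LeggoApp internals are not reproduced here; as a loose illustration of the "pluggable experiment" idea, the Python sketch below registers experiments in a plugin registry and only promotes one once a (hypothetical) acceptance check passes.

```python
REGISTRY = {}


def experiment(name):
    """Decorator that registers an experiment plugin under a name."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap


@experiment("churn-model-v2")           # hypothetical experiment name
def churn_model_v2(data):
    # Placeholder analytics step; a real plugin would run an ML model.
    return {"score": sum(data) / len(data)}


def promote(name, validation_data, acceptance=lambda r: r["score"] > 0.5):
    """Run the experiment on validation data and promote it only if the
    acceptance criterion passes."""
    result = REGISTRY[name](validation_data)
    return ("production" if acceptance(result) else "rejected"), result


print(promote("churn-model-v2", [0.4, 0.8, 0.9]))  # ('production', {'score': 0.7})
```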
-
Miniature:
A big data analytics and faceted search engine.
Designing Miniature: a big data platform and language model (MirasText) for analyzing social media and news outlets for media and competitor monitoring and brand analysis.
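As a standalone illustration of what faceted search means in this context (not the Miniature code), the sketch below counts facet values, here hypothetical "source" and "sentiment" fields, over a set of matching documents so a UI can show filter counts next to each facet.

```python
from collections import Counter

# Hypothetical documents already matched by a text query.
docs = [
    {"source": "twitter", "sentiment": "negative", "brand": "acme"},
    {"source": "news",    "sentiment": "positive", "brand": "acme"},
    {"source": "twitter", "sentiment": "positive", "brand": "acme"},
]

def facet_counts(docs, fields):
    """For each facet field, count how many matching docs have each value."""
    return {f: Counter(d[f] for d in docs if f in d) for f in fields}

print(facet_counts(docs, ["source", "sentiment"]))
# {'source': Counter({'twitter': 2, 'news': 1}),
#  'sentiment': Counter({'positive': 2, 'negative': 1})}
```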