2019 Roadmap for kdb+ Project
The roadmap below has been set out with the needs of "enterprise kdb+ users" in mind. For more background, see:
2019Q1 Increase Participation, Standards Development
Working Group recruitment:
We'd like a working group small enough to work quickly, but diverse enough to capture a majority of use cases. We are targeting 8-12 active participants with the assumption that not all members will be willing / able to participate in all discussions. Participants should have experience in organizations facing the challenges of "enterprise" deployment of kdb+.
Working Group members should actively recruit known members of the kdb+ community likely to add significant value.
Should the working group exceed these participation targets, we may choose to institute policies, workflows, and/or committees to increase overall effectiveness.
Standards Development:
The working group will begin by developing a common coding style, documentation format, and namespace standards.
Adherence to these standards will be required for all FINOS-hosted projects and evangelized through working group members and external forums (such as mailing lists, meetups, conferences, etc.).
2019Q2 Enterprise Framework Needs, Standalone Utilities, Incubator, Python-kdb Cross-Pollination
Enterprise Framework Needs:
We seek to identify the greatest needs of the enterprise kdb+ community and document them as a "wish list" to onboard to The Foundation.
This list will express the features required by enterprise users, as well as how projects can interoperate. The goal is to be implementation-agnostic and to focus on APIs that can remain stable.
Standalone Utilities:
It is expected that APIs could take significant time to reach a stable state due to:
- Desire to incorporate modern "best practices" (e.g., structured logging rather than plain string messages).
- Tailoring APIs based on "antipatterns" encountered by working group members on previous projects.
- Learning which Python conventions and practices can be borrowed to support q-Python hybrid systems that incorporate Machine Learning.
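To make the first point concrete, the sketch below contrasts string-message logging with structured logging. It is a minimal illustration in Python (the working group has not yet settled on a q convention, and the function names here are hypothetical), but the same distinction applies to any logging component the group might standardize:

```python
import json

def format_plain(order_id, limit, excess):
    # String-message style: the fields are embedded in free text, so any
    # downstream tool that wants them back must re-parse the prose.
    return f"order {order_id} rejected: limit {limit} exceeded by {excess}"

def format_structured(event, **fields):
    # Structured style: the event name and fields stay machine-readable,
    # so log pipelines can filter and aggregate without regexes.
    return json.dumps({"event": event, **fields}, sort_keys=True)

print(format_plain(42, 100, 7))
print(format_structured("order_rejected", order_id=42, limit=100, excess=7))
```

The structured record costs a little more to emit but is far easier to query at enterprise scale, which is why it is listed among the "best practices" under consideration.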
Users invoke standalone utilities through a command shell, which is less expressive than q. It should therefore be easier to reach agreement on the interfaces of standalone utilities and release them to the community.
Incubator:
Projects that are not fully ratified can live in "incubator" branches for comment and refinement. Policies and procedures for such projects will be formalized and posted.
Python-kdb Cross-Pollination:
Kx Systems has released q-Python cross-interpreter calling capability as part of its "Fusion APIs".
For many enterprise kdb+ licensees, projects are springing up with Data Science pipelines that combine:
- q for "data wrangling" and feature engineering on time series data
- Python for exploratory data analysis, visualization, and Machine Learning libraries
We hope to support this by ensuring that our standards align with Python best practices where it is beneficial to do so. However, many of the best kdb+ developers have been deeply steeped in Kx culture for a decade or more and may be new to Python, so some ramp-up time will be required unless we can recruit Python experts into the kdb+ Working Group.
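As an illustration of the division of labor in such a pipeline, the sketch below computes a rolling-mean feature in plain Python. In a hybrid system this kind of feature engineering would typically happen on the q side (q's mavg produces the same partial-window results) before the series is handed to Python for modelling; the function name and data here are hypothetical:

```python
from statistics import mean

def rolling_mean(xs, window):
    # Simple moving-average feature over a price series. The first
    # window-1 entries use partial windows, matching q's mavg behavior.
    return [mean(xs[max(0, i + 1 - window): i + 1]) for i in range(len(xs))]

prices = [10.0, 11.0, 12.0, 11.0, 13.0]
print(rolling_mean(prices, 3))
```

Agreeing on conventions like this (partial vs. full windows, null handling) is exactly the kind of cross-language alignment the Working Group would need to document.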
2019Q3 Preloaded Framework Components
Preloaded components have less complex interactions with the user and with other components, so they will likely be released before module-loadable components, for the same reasons as those cited for "Standalone Utilities". Examples of "preloaded components" under consideration include:
- command line parsing
- default handlers
- logging
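As a sketch of what the first of these might look like, the hypothetical parser below groups shell tokens under their preceding flag, the same dictionary-of-flags shape that q's built-in .Q.opt returns. This is an illustration of the interface style under discussion, not a proposed implementation:

```python
def parse_args(argv):
    # Group "-flag v1 v2" style tokens into {flag: [values]}.
    # A leading "-" followed by a digit is treated as a negative-number
    # value rather than a new flag.
    opts, key = {}, None
    for tok in argv:
        if tok.startswith("-") and not tok[1:2].isdigit():
            key = tok[1:]
            opts.setdefault(key, [])
        elif key is not None:
            opts[key].append(tok)
    return opts

print(parse_args(["-port", "5010", "-sym", "AAPL", "MSFT"]))
# -> {'port': ['5010'], 'sym': ['AAPL', 'MSFT']}
```

Because flag values arrive as string lists, a ratified component would also need to standardize type coercion and defaulting, which is where most of the working-group discussion is expected to land.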
2019Q4 Module Loadable Components
By this point, the expectation is that the Working Group will have gathered sufficient information about:
- patterns and antipatterns for module loading in q and Python
- how users develop components in environments such as JupyterQ and PyQ
- deployment of q/Python hybrid systems in enterprise environments
With that information in hand, agreeing on a module-loading standard and producing a proof of concept (PoC) should be quick.
Once the PoC is available, an ecosystem of components should grow from there.
Closing Remarks
The ultimate goal is to foster a community where the best project "wins" through adoption and contributions - not through a blessing of the Working Group.
The Working Group is envisioned as a marketplace for projects: a forum where we identify common areas of difficulty and offer solutions that may benefit the wider Data Technologies Program.
Need help? Email help@finos.org and we'll get back to you.
Content on this page is licensed under the CC BY 4.0 license.
Code on this page is licensed under the Apache 2.0 license.