2017-03-09 Meeting notes
Attendees
Name | Organisation | Present? |
---|---|---|
Former user (Deleted) (chair) | Symphony Communication Services LLC | Y |
Afsheen Afshar | JP Morgan Chase | N |
Matthew Bastian | S&P Capital IQ | N |
Hamish Brookerman | S&P Global Market Intelligence | N |
Anjana Dasu | Symphony Communication Services LLC | Y |
Doug Esanbock | Dow Jones | N |
Anthony Fabbricino | BNY Mellon | Y |
Former user (Deleted) | Blackrock | N |
Former user (Deleted) | Symphony Communication Services LLC | Y |
Dave Hunter | | N |
Richard Kleter | Deutsche Bank | N |
Nick Kolba | OpenFin | Y |
Samuel Krasnik | Goldman Sachs | N |
Former user (Deleted) | BNY Mellon | N |
Former user (Deleted) | S&P Capital IQ | N |
Former user (Deleted) | Dow Jones | N |
Jiten Mehta | Capital | N |
Former user (Deleted) | Symphony Communication Services LLC | Y |
Former user (Deleted) | Credit Suisse | N |
Scott Preiss | S&P Capital IQ | N |
Former user (Deleted) | FactSet | Y |
Former user (Deleted) | FactSet | Y |
Former user (Deleted) | TradeWeb | N |
Kevin Swanson | CUSIP | N |
Former user (Deleted) | Markit | Y |
Paul Teyssier | Symphony Communication Services LLC | Y |
Former user (Deleted) | Credit Suisse | N |
Gavin White | Tradition | N |
Former user (Deleted) | HSBC | N |
Former user (Deleted) | Symphony Software Foundation | N |
Former user (Deleted) | Symphony Software Foundation | Y |
Aaron Williamson | Symphony Software Foundation | Y |
Actions items from previous meetings
- None
Agenda
Time | Item | Who | Notes |
---|---|---|---|
5 min | Convene & roll call | | |
40 min | Discussion of MessageML v.2 discussion draft | Former user (Deleted) | |
10 min | Discussion of ongoing meeting schedule | | |
5 min | AOB & adjourn | | |
Meeting notes
Introduction of Draft
Bruce Skingle: I’ve shared my screen. The reason we’ve called this meeting is to discuss an update to MessageML, our markup for messages. We’ve talked about this for some time, but the previous draft was never finished. We’ve revised it because 1) having delivered this functionality for our own horizontal apps, we want to give that functionality to people outside the LLC, and 2) we realized that in order to provide most types of functionality to horizontal apps, you shouldn’t need a custom renderer in the other app—you should be able to provide a much richer experience without implementing one. For a lot of use cases, we think we can make it easier and allow people to avoid building custom renderers. Based on feedback from those who have built horizontal integrations, we’ve come up with alternatives for dealing with structured objects in the EntityML portion of the specification.
The discussion draft is in Confluence. We hope to implement it very soon and make it available in production pods, but the proposal is not finalized, and we may need to make further changes. The purpose of opening discussion at this stage is to take the input of the WG into consideration before it’s finalized.
The first page talks about the general proposal and its constituent parts. PresentationML is about presentation—bold, italic, etc. Will be refocused to become strict subset of HTML5. This is the format in which messages will be stored in our database, and what UI will see when processing messages. Some tags in current version aren’t compatible with HTML5; they have been replaced.
EntityML will be replaced with a JSON format, EntityJSON. Reasons: based on experience, the existing XML format started to feel complicated to use. In future, we’ll store the complete presentation document—the PresentationML—as the full content of the message. It is the format you’ll see if you have no custom renderer. The current format is a combination of markdown and JSON; you can’t interpret the markdown without processing the JSON.
EntityJSON will only come into play if you have a custom renderer in place. The portion of the PresentationML that references an EntityJSON object would be replaced with an iFrame that the custom renderer would replace as needed.
MessageML is set of markup available on API agent. If you’re writing a bot or integration, the expanded vocabulary of tags in MessageML will be available. MessageML will be translated into PresentationML and EntityJSON before being rendered.
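To illustrate the one-way translation, a bot might submit something like the following MessageML (the tag and attribute names here are illustrative assumptions, sketching the mention-by-email convenience described above, not the final spec):

```xml
<messageML>
  Please review, <mention email="jane.doe@example.com"/>.
</messageML>
```

which the agent might translate into PresentationML along these lines (the user ID and URL prefix are invented for illustration):

```html
<div data-format="PresentationML">
  Please review, <a class="mention" href="pod://userid/12345">@Jane Doe</a>.
</div>
```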
Peter Monks: What date do you need comments by and how should we provide them?
BS: make them on Confluence if possible, or privately to me. Timeframe – Paul?
Paul Teyssier: we need to release a beta version before the end of Q2. By beta, I mean the entirety of MessageML is implemented & would want to work with some partners closely to pressure-test the implementation.
BS: we’ll never close our ears to feedback, but the later we get it, the harder to adapt.
Frank Tarsillo: How limited is PresentationML as a subset of HTML5, and why not open it up to the whole of HTML5? I know you’ve referenced security concerns, but please go over the specifics.
BS: OK if I describe PresentationML first and go from there?
FT: sure.
Johan Sandersson: It would help me understand better what was happening if I could see some visual examples. Would anyone like to work with me to put some together?
PT: I can work with you on that.
BS: We have a couple of examples in the draft, but I agree that’s important.
PresentationML
BS: PresentationML will be styled by stylesheets that ship with the Symphony client. With any one theme, you’ll get a consistent set of styles. PresentationML is a subset of HTML5: if a given tag or attribute is not specified to be included, it’s excluded. All PresentationML content is encoded in UTF-8. It’s XHTML, too – every tag must be closed. Some tags have an HREF attribute that takes a URL as a parameter, and we’ve restricted this in some ways. For data URLs, where the content is base64-encoded, we’ll permit only a specific subset of formats.
Internal and Hosted URIs will also get special treatment.
This restrictive proposal isn’t the only way of doing it – we could leave restrictions to the customer’s network configuration. But our smaller customers may not have robust security controls in their firewalls, so these protections are primarily for them and to protect them.
Permitted tags
[Bruce covered more than the notes reflect; it’s all available in the discussion draft.]
Here is the list of permitted tags. We also specify allowable attributes. No in-line styles, just the class attribute for styling. There is special handling for divs: the data-entity-id attribute is the tag the renderer will use to see if a custom renderer should be invoked. If the style is “entity”, it will look for this attribute, use that to locate the right piece of JSON, then ignore the body of the div and replace it with an iframe to be filled in by the custom renderer once it’s invoked.
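As a sketch, such an entity div in stored PresentationML might look like this (the class name and attribute follow the draft; the id value and fallback body are invented for illustration):

```html
<div class="entity" data-entity-id="security1">
  <b>Apple Inc.</b>
</div>
```

A client with a matching custom renderer would ignore the body and substitute an iframe; any other client simply renders the fallback body as-is.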
Images: we may need to limit the maximum size. Hashtags & cashtags will be represented by anchors. If you just put them into a browser, it will not be useful but it will be rendered. We know that ContentExport will be able to interpret this and render it without difficulty. Mentions are also represented by anchors. We use predefined URL prefixes to represent a user ID in the HREF. The HTML5 audio tag can be used to represent, e.g., a chime. We include the autoplay attribute to make the HTML5 consistent with expectations.
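For instance, a hashtag and a chime might be stored roughly as follows (the URL prefix, class name, and file URL here are illustrative assumptions, not the final spec):

```html
<a class="hashtag" href="pod://hashtag/earnings">#earnings</a>
<audio src="https://example.com/chime.mp3" autoplay="autoplay"></audio>
```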
Here’s a list of styles we’re going to support. We’re using descriptive style names like “warning” instead of “yellow.” That means that if different message producers are producing messages with similar purposes, users will get consistent rendering across clients. Badges are designed to be combined with a background style to get different colors. The card style indicates a piece of markup that is a card, which can be opened and closed by the user. Icon: a style to render an image at a standard size that is consistent with use within text.
Any immediate questions on the PresentationML?
Moving on to MessageML – the extended set of markup available at the API agent that will be translated into PresentationML and JSON. It’s a one-way transformation. Public API consumers can either 1) submit PresentationML, in which case they get back exactly what they sent in, or 2) use convenience functions that make it easier to do common things, like create mentions. You can just use an email address to identify a user via the API, and the agent will translate that into an internal user ID and present it correctly. What you get back as PresentationML is different from what you put in as MessageML for this reason.
Some tags like <chime/> will get translated into HTML5 PresentationML equivalents. The MessageML shorthand allows you to be certain that what’s rendered will be functionally coherent, and to avoid errors in more complex PresentationML.
We may decide to support templating, which would be useful for things like the JIRA integration. Could write one bit of MessageML, to be expanded on the backend into appropriate PresentationML and JSON. We would propose just implementing Freemarker in the agent, giving you the option of using Freemarker tags in the agent, and the fully-expanded markup is what would be submitted to the backend.
EntityJSON
EntityJSON is more restrictive than JSON. There are reserved names (type & version). Top-level elements must be valid EntityJSON and must include type and version. Version is used to select renderers capable of rendering the data provided. Propose to use simple major.minor format for version numbers.
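As a sketch, an EntityJSON payload for a security entity might look like this (the reserved type and version fields follow the draft; the top-level key and all other names and values are invented for illustration):

```json
{
  "security1": {
    "type": "org.example.security",
    "version": "1.0",
    "ticker": "AAPL",
    "isin": "US0378331005"
  }
}
```

The top-level key is what a data-entity-id attribute in the PresentationML would reference to locate this object.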
[Goes through examples in the spec]
If nested objects include type & version, data is more reusable, but it’s not required.
Final thing: some examples. Here’s a message containing a financial security. It’s submitted in PresentationML, with the entity type in EntityJSON matching the data-entity-id in the PresentationML.
Backward Compatibility & Templating
It’s possible we’ll continue supporting the old MessageML format. Here’s an example of how the XML gets translated into PresentationML and EntityJSON. It’s not necessarily shorter, but it’s a lot clearer. This translation is how we’d support MessageML if we decided to.
Here’s an example of a “for” loop implemented via Freemarker. It’s a rich templating language and we’re reasonably comfortable with it. Once this is processed into PresentationML, the client doesn’t see or have to comprehend the templating language.
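A FreeMarker “for” loop of the kind described might look like this (the model variable names and card markup are invented; only the <#list> directive and ${...} interpolation are standard FreeMarker syntax):

```ftl
<#list issues as issue>
  <div class="card">${issue.key}: ${issue.summary}</div>
</#list>
```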
Restriction of PresentationML Feature Set
Answering Frank’s question: with customers writing their own clients, there’s no guarantee that malicious markup won’t end up getting rendered by those clients, and we can’t filter it because we get it encrypted. There’s a spectrum from very safe to very functional, and we’ve started at the safe end. Where to set the line is a conversation we should have, but that’s the rationale.
The other consideration is user experience. To give an analogy, when people send complex HTML email, the client will usually block a lot of the content to protect the user, leading to rendering issues. We want to avoid that. From a UX point of view, it’s helpful to the user to have a set of styles with functional names; a larger subset would complicate that, and I know I wouldn’t want to see lots of different styles coming at me.
FT: question is whether the network should have to secure the endpoints or whether that’s on the client developers. My concern is that people will just solve this by encapsulating their own javascript anyway. So either we solve that now, or we do it progressively as that happens and we have to deal with it.
PT: There are additional dimensions. First, a sense of nativeness—this is a significant evolution from the previous standard. Allows integrators/customers to provide things that are very native. The other question is flexibility—you can always get an iframe and render how you please. So we think it’s the best of both worlds. Also need to think of cross-device, so need to make sure we’re targeting mobile, where web-like display is hard to make highly performant from a display and UI perspective.
FT: I understand that, but it seems like you’re focusing on the Symphony clients. As we see expansion to clients & integrations that don’t involve Symphony endpoints, we need to consider how this proposal affects those use cases.
BS: My first observation is that all clients are required to support everything in our specification, so we’re increasing the burden we put on them by expanding the set of tags supported. There’s also a philosophical question: is it OK for people to put messages into Symphony that only their clients can display? Seems like we don’t want that, because you don’t know where the messages are going. Biggest value in this implementation is that all stored messages can be rendered by unknown endpoints. Don’t want to return to the old days of the web, with different renderings by different browsers.
FT: I understand that. That’s my only concern here, and I’m wondering if this limits us in the future.
BS: Where would you draw the line?
FT: I’d prefer to just support all of HTML5 and see what that looks like.
BS: I fear that we’d find all kinds of corner cases with mobile clients where they don’t do what they’re expected to. It’s possible to build HTML5 website that works great on both a 4K monitor and phone, but it’s really hard, and I fear we’d be thrusting that complexity upon our users.
FT: I’ll shut up and let others comment on this.
PM: The original EntityML proposal had a fallback option. Would that help us address Frank’s concerns?
BS: It’s the same thing. That’s still what we’re doing by allowing insertion of an iframe using EntityJSON. If we start with something narrow, we can always expand. If we start broad then narrow, we’ll end up with messages in the network that we can’t render later.
PT: What input does the group have on what the working team should consider?
Johan Sandersson: For me, the PresentationML in the JIRA & Apple examples seem to be stored in the same way. What’s the difference?
BS: The difference is data-entity-id, which points to a different entity in the EntityJSON. Then the type & version within the EntityJSON object determines what the renderer will do.
JS: My concern is that, for us to support cashtags across clients in different markets, we will still require custom renderers to be installed. If I want to send “$appl” to one user who understands ISIN and one who understands CUSIP, they’ll each need a custom renderer to display the security correctly. I think Symphony should natively render at least security entities like this.
PT: This might not be the best example. The native rendering here is just “Apple” in bold. Maybe that’s too limited and we could do something more complex with the default rendering. But the app gets the entire JSON object along with the cashtag PresentationML tag and decides what to do with it.
JS: Ok, I understand.
BS: I think there are two issues to take away: 1) how restrictive to be—need to figure out how to reach consensus on that; 2) we should expand these examples and provide some visuals.
PM: If you and Paul have a specific date for feedback, please keep us updated on that.
PT: I would say in the next two weeks is best.
Action items
- Former user (Deleted) and Johan Sandersson will work together on visual examples to illustrate proposal
- More to come
Need help? Email help@finos.org and we'll get back to you.
Content on this page is licensed under the CC BY 4.0 license.
Code on this page is licensed under the Apache 2.0 license.