Section 5: Discussion & Considerations Informing Further Development of TokenSpace

Published: Aug 10, 2019

Arriving at a robust classification framework that employs categorical discrimination in tandem with judiciously chosen and carefully optimised formulae for quantitative characteristics is non-trivial, and this will be a focus of future work, as will time-dependent scoring mechanisms and scoring ranges for cases where there is significant disagreement or uncertainty over the optimal location of an asset in TokenSpace. Token issuers often engineer their assets to have a perceived value proposition by apportioning future cash flows to holders via mechanisms such as token burns, staking or masternodes, which are coarsely analogous to share buybacks but with critical and significant differences to the downside [132, 133]. Likewise, masternode rewards, staking rewards and issuer-sanctioned airdrops map coarsely onto dividends in legacy finance. However, if a token is deemed too security-like then exchanges may be reluctant to list it for fear of future liability or compliance issues.

It cannot be ignored that, during the final stages of preparation of this document, one of the highest-profile and previously most selective cryptoasset exchanges, Coinbase, listed XRP, an asset which has repeatedly been accused of decentralisation theatre and academic whitewashing intended to disguise Ripple Labs' business model of selling highly security-like tokens to retail investors. The motivations for this listing are unclear at present but appear to be influenced by the protracted bear market in cryptoassets, which naturally affects exchanges severely since trading volumes, responsible for the majority of their revenues, are significantly attenuated by market conditions. Ripple has previously offered inducements to Coinbase to list XRP; at the time of writing the specifics of the listing agreement are unknown, but they would have to be sufficiently favourable to mitigate any compliance issues and associated costs which may arise from this development.

It is important to discuss the limitations of TokenSpace explicitly. Since many attributes of cryptographic networks and assets are continuous, exhibit subtle variations and/or present edge cases, a mixture of categorical and numerical discrimination is most likely the optimal approach for a subjective classification system such as TokenSpace. The instantiations of TokenSpace with the most explanatory power will therefore be hybrids of traditional and phenetic taxonomy types. This design choice is justified by the desired output of the classification process: numerical scores which populate asset locations in the Euclidean 3D space that TokenSpace creates. Conversely, in the interests of pragmatism, a great deal of insight may still be derived from a primarily categorical classification approach with some range-bound indices; if this meets the needs of the user then it is an acceptable and valid design choice. It also minimises over-reliance on measurable attributes, which may be subject to manipulation motivated by decentralisation theatre.
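The hybrid categorical/numerical approach described above can be sketched as follows. All characteristic names, weights and scores here are invented for illustration; the framework deliberately leaves these choices to the researcher.

```python
# Hypothetical sketch of a hybrid (categorical + numerical) dimension score.
# Binary categorical judgements and continuous range-bound indices are
# combined into a single [0, 1] score via a weighted mean.

def dimension_score(categorical, numerical, weights):
    """Combine boolean flags and [0, 1] indices into one [0, 1] score."""
    features = {**categorical, **numerical}
    total = sum(weights.values())
    return sum(weights[k] * float(features[k]) for k in weights) / total

# Illustrative Securityness (S) inputs for a hypothetical asset.
categorical = {"profit_expectation": True, "common_enterprise": True}
numerical = {"issuer_control_index": 0.7, "distribution_breadth": 0.4}
weights = {"profit_expectation": 3, "common_enterprise": 2,
           "issuer_control_index": 2, "distribution_breadth": 1}

S = dimension_score(categorical, numerical, weights)  # 0.85

# Repeating the exercise for Moneyness (M) and Commodityness (C) yields
# an (S, M, C) coordinate in the Euclidean 3D space of TokenSpace.
```

The point of the sketch is that categorical judgements and numerical indices can coexist in one scoring pipeline while still producing the spatial coordinates TokenSpace requires.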

As with all information systems, the principle of GIGO (Garbage In, Garbage Out) applies. Potential pitfalls include misinformed judgement, a lack of methodological rigour in taxonomy construction, over-estimation of the researcher's knowledge of the field or competence in applying taxonomic methodology, latent biases, poor-quality or misleading data sources or judgements, and a lack of appreciation of edge cases or category overlap; the author does not exclude themselves from susceptibility to any or all of these. Each may severely limit the usefulness of the resulting TokenSpace and therefore its explanatory power.

It must be re-iterated that TokenSpace affords a subjective conceptual framework for the comparative analysis of assets. The meta-characteristic definitions and choices, the dimensions, categories and characteristics employed, and the score modifiers and/or weightings are all subjective, depending on choices made by the researcher in light of the intended purpose. It is entirely realistic that an asset issuer may tailor their taxonomies, score modifiers, regulatory boundary functions or a combination of the above to present an assessment favourable to their biases or motivations. Additionally, the changing nature of the regulatory and compliance landscape may have a large bearing on what can be considered acceptable asset characteristics in compliance terms and may necessitate a re-evaluation of weightings and/or score modifiers [51].
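The sensitivity of asset locations to subjective weighting choices can be demonstrated concretely. The characteristic names and numbers below are hypothetical; the point is that identical raw scores yield very different Securityness values under different weighting schemes, which is precisely the tailoring opportunity available to a motivated issuer.

```python
# Hypothetical illustration: the same raw characteristic scores produce
# different Securityness values under two different weighting schemes.

scores = {"profit_expectation": 1.0, "governance_centralisation": 0.2}

def weighted(scores, weights):
    """Weighted mean of characteristic scores."""
    return sum(scores[k] * weights[k] for k in weights) / sum(weights.values())

# An issuer-friendly scheme down-weights the damning characteristic,
# while a regulator-oriented scheme emphasises it.
issuer_weights = {"profit_expectation": 1, "governance_centralisation": 4}
regulator_weights = {"profit_expectation": 4, "governance_centralisation": 1}

issuer_view = weighted(scores, issuer_weights)        # 0.36
regulator_view = weighted(scores, regulator_weights)  # 0.84
```

Under the first scheme the asset appears only mildly security-like; under the second it sits firmly in security territory, despite no change to the underlying judgements.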

As discussed in Section 3.3.2, with particular reference to the 2017 vintage of token sales for so-called utility tokens as a regulatory arbitrage mechanism, some distinction between “good” and “bad” securities, moneys or commodities is an area of particular interest. Extending TokenSpace to occupy a region between -1 and +1 could provide a coarse mechanism to do this, though the way that dimension scores and weightings are determined would have to be adjusted, and naive methods such as taking moduli do not sufficiently discriminate as to the quality of an asset.
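The inadequacy of the modulus noted above can be shown in two lines. Assuming a hypothetical signed convention (-1 for a clearly "bad" or non-compliant asset, +1 for a clearly "good" or compliant one), taking absolute values collapses the sign and with it the quality information:

```python
# On a signed [-1, +1] axis (hypothetical convention: sign encodes asset
# quality), the modulus of the score cannot distinguish good from bad.

good_security, bad_security = 0.8, -0.8

moduli_equal = abs(good_security) == abs(bad_security)  # True

# Any quality measure based on |score| therefore treats a compliant
# security and a deliberately non-compliant one as equivalent; the sign,
# or a separate quality dimension, must be retained.
```

This is why the determination of dimension scores and weightings would need adjusting before a signed region could carry quality semantics.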

Planned future developments include the construction of TokenSpaces with higher dimensionality as discussed in Section 3.3.3, and alternative taxonomies for different meta-characteristics with intended purposes other than increasing regulatory clarity. The scoring mechanisms discussed in Sections 3.3.4 and 3.3.5, including categorical and indexed dimensions, score modifiers and weightings, may also be further refined and extended. Other approaches to generating asset coordinates for TokenSpaces will also be explored, with plans in place to form “digital round tables” with broad subsets of stakeholders to arrive at asset scores or ranges.

Work is underway with collaborators to extend TokenSpace into DAOSpace in order to characterise similarities and differences of Decentralised Autonomous Organisations rather than assets. One interesting nexus of DAOSpace and TokenSpace is the attempt to disentangle the design choices and properties of decentralised organisations (and their native assets) with respect to Securityness in particular. As discussed in Section 1.3.3, the SEC has already made clear that TheDAO tokens (DAO) would be classified as securities, and therefore profit-oriented tokenised DAOs must be designed carefully with this in mind should they intend to comply with existing regulations. Interestingly, Malta has passed laws giving DAOs legal personality, meaning that another cycle of jurisdictional arbitrage may be underway, this time with organisations as well as, or instead of, assets (Section 1.1.1).

Likewise, stablecoins with certain properties, especially those related to asset issuance, may also be problematic from a compliance perspective (Section 1.3.3), so a potential extension of this work towards a StablecoinSpace classification framework for pegged assets is currently being explored.

A future goal of TokenSpace is the development of an environment which may be updated in real time from market, semantic, linguistic and/or network data feeds in order to provide dynamic information on evolving asset characteristics, as well as historical trends at varying points in time. This may facilitate the goal of descriptive, explanatory and even predictive techniques for understanding, rationalising or foreseeing trends, issues and opportunities relating to assets and networks before they become readily apparent from naïve analysis.
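One minimal way such a dynamic environment could ingest a feed is an exponentially weighted moving average over streamed characteristic measurements, retaining the history for trend analysis. The feed values and smoothing constant below are invented for illustration; the paper does not prescribe an update mechanism.

```python
# Hypothetical sketch: smoothing a stream of characteristic measurements
# into an evolving dimension score, keeping history for trend analysis.

def ema_update(current, observation, alpha=0.1):
    """Blend a new observation into the running score (EWMA)."""
    return (1 - alpha) * current + alpha * observation

score = 0.5                     # prior Securityness estimate
feed = [0.55, 0.6, 0.62, 0.58]  # invented network/market readings
history = []
for obs in feed:
    score = ema_update(score, obs)
    history.append(score)
# history now records the score trajectory over time.
```

A real implementation would draw observations from live market or network data sources and store timestamped trajectories per dimension, enabling the historical and predictive analyses described above.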
