Without downplaying the enormous contributions of biology, botany, zoology, entomology and related life sciences to the development of classification systems derived from the Linnaean binomial paradigm, this work directs its emphasis towards broader and more recent approaches to the categorisation of objects and concepts into classes, categories or other types of divisions. For clarity, classification is a general term which may appropriately describe any of a number of approaches to the organisation of domain-specific knowledge. Within classification there are several subsets, which are briefly defined and discussed below for precision.
Classification was defined by Bowker & Star and by Bailey respectively:
“The ordering of entities into groups or classes on the basis of their similarity.” [64]
“A spatial, temporal, or spatio-temporal segmentation of the world.” [65]
Such a classification may be approached conceptually, axiomatically, empirically or intuitively. Discrimination between objects is often univariate (e.g. the number of legs an organism possesses, used to determine a biological specimen's phylum, class, order and so on) but may also be multivariate, where two or more attributes are employed to differentiate between objects. One potential criticism of classification methods in general is that naïve categorisation may be inadequate to encapsulate the variety or heterogeneity observed in a set of objects, with issues arising at the boundaries of continuous variables or with edge cases.
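To make the univariate / multivariate distinction concrete, the following sketch contrasts the two modes of discrimination on a toy zoological example; the attribute names and class labels are hypothetical illustrations, not drawn from any cited classification scheme.

```python
# Toy illustration (hypothetical labels): univariate vs. multivariate discrimination.

def classify_univariate(legs: int) -> str:
    """A single attribute assigns the class - simple, but blind to edge cases."""
    if legs == 6:
        return "insect"
    if legs == 8:
        return "arachnid"
    return "unclassified"


def classify_multivariate(legs: int, has_wings: bool) -> str:
    """Two attributes jointly discriminate, separating cases the
    single-variable rule conflates (e.g. winged vs. wingless hexapods)."""
    if legs == 6:
        return "winged insect" if has_wings else "wingless insect"
    if legs == 8:
        return "arachnid"
    return "unclassified"
```

Here the univariate rule places a wingless six-legged specimen and a winged one in the same class, whereas the multivariate rule separates them, illustrating how additional attributes increase discriminatory power.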
A classification system was described by Bowker & Star and later developed by Nickerson et al. to comprise the following:
“A set of boxes (metaphorical or literal) into which things can be put to then do some kind of work - bureaucratic or knowledge production.” [64]
“The abstract groupings or categories into which we can put objects” with the term classification used “for the concrete result of putting objects into groupings or categories.” [66]
The term framework was characterised by Nickerson et al., building upon work by Schwarz et al., as follows:
“a set of assumptions, concepts, values and practices that constitutes a way of understanding the research within a body of knowledge.” [67]
The term typology correctly applies to a series of conceptually-derived groupings, often multivariate (thereby more discriminatory than simple classification systems) and predominantly qualitative in nature. [65, 68]
Taxonomy is the most widely used term to describe classificatory approaches, though a recent literature survey suggests it is often used with a lack of precision [66]. A taxonomic system can be understood as a subset of the classification systems defined above, and a taxonomy itself can be generated and derived from a taxonomic system. Most literature adopts the term taxonomy for empirically-derived classification systems, in contrast to conceptually-derived typological systems. However, it is clear from the classification literature that taxonomy may refer to both empirically and conceptually derived classifications, and this broader, modern usage of the term is employed in this work. A subset of taxonomies employing quantitative classification are termed phenetic approaches: typically empirically-derived groupings of attribute similarity, largely arrived at using statistical and data-analytical techniques such as correlation mapping, data clustering or principal component analysis. By contrast, cladistic approaches classify according to the historical, deductive, evolutionary or genealogical inter-relationships of sets of objects. Relevant examples include the fragmentation and proliferation of Linux codebase / kernel / distribution descendants, upstart networks employing the CryptoNote protocol codebase, and ledger forks of cryptocurrency networks such as Bitcoin [5]. Taxonomy helps researchers study relationships between objects and concepts [69], and such approaches may further help researchers find voids in parameter space which may be the result of anomalous emergent characteristics or a mismatch between ensembles of attributes [66].
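As a minimal sketch of the phenetic, statistically-derived grouping described above, the following code groups four hypothetical objects by attribute similarity using principal component analysis (computed here by eigendecomposition with NumPy); the attribute matrix is invented purely for illustration.

```python
import numpy as np

# Hypothetical attribute matrix: rows are objects, columns are measured attributes.
X = np.array([
    [1.0, 0.9, 0.1],
    [0.9, 1.0, 0.2],
    [0.1, 0.2, 1.0],
    [0.2, 0.1, 0.9],
])

# Centre the attributes and form the sample covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)

# Principal components are eigenvectors of the covariance matrix;
# eigh returns eigenvalues in ascending order, so the last column
# is the direction of greatest attribute variance.
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = Xc @ eigvecs[:, -1]

# Objects sharing a sign on the first component fall into the same phenetic group.
groups = pc1 > 0
```

With this toy data the first two objects land in one group and the last two in the other, mirroring the kind of attribute-similarity clusters that a phenetic taxonomy formalises.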
Following Weber [70], Bailey characterised the notions of ideal types and constructed types with reference to typology and taxonomy development respectively [65]. For the most part, typologies conceptually derive an ideal type (category) which exemplifies the apex (or maximum) of a proposed characteristic, whereas taxonomies develop a constructed type with reference to empirically observed cases which may not necessarily be idealised but can be employed as canonical (or most typical) examples. Such a constructed type may subsequently be used to examine exceptions to the type. Bailey exemplifies this distinction by equating an ideal type to the optimum or most extreme value in a collection of data, whereas the constructed type may be taken from the mean or median of a population. In developing a typological system through conceptual or theoretical foundations, the structure of a typology may be elucidated through deduction or intuition. This approach may be employed to build multi-layered systems using conceptual elements, empirical elements, or combinations thereof - termed indicator / operational levels. Such a method can be used to transition in either direction between conceptual and empirical bases for the classification system as it is iteratively developed. Nickerson et al. summarise Bailey's approach:
“A researcher may conceive of a single type and then add dimensions until a satisfactory typology is reached, in a process known as substruction. Alternatively the researcher could conceptualise an extensive typology and then eliminate certain dimensions in a process of reduction.” [65, 66]
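Bailey's substruction and reduction can be sketched programmatically: a typology grows as the Cartesian product of its dimensions, and shrinks when a dimension is eliminated. The dimension names below are hypothetical illustrations.

```python
from itertools import product

# Substruction: start from a single conceived type and add dimensions,
# each new dimension multiplying the candidate types in the typology.
dimensions = {
    "derivation": ["conceptual", "empirical"],
    "variables": ["univariate", "multivariate"],
}
typology = list(product(*dimensions.values()))
# Four candidate types, e.g. ("conceptual", "univariate"), ...

# Reduction: eliminate a dimension judged to add no discriminatory
# power, collapsing the typology back to fewer, broader types.
reduced = sorted({t[:1] for t in typology})
# Two types remain: ("conceptual",) and ("empirical",)
```

The two operations are inverses in spirit: substruction multiplies the number of candidate types by each added dimension's cardinality, while reduction projects the typology onto a subset of its dimensions.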
In contrast to Kuhn's paradigmatic assessment of the evolution of concepts, Popper's Three Worlds provides some philosophical bedrock from which to develop generalised and systematic ontological and / or epistemological approaches. The first world corresponds to material and corporeal nature, the second to consciousness and cognitive states and the third to emergent products and phenomena arising from human social action [71, 72]. Niiniluoto applied this simple classification to the development of classifications themselves and commented:
“Most design science research in engineering adopts a realistic / materialistic ontology whereas action research accepts more nominalistic, idealistic and constructivistic ontology.” [73]
Materialism attaches primacy to Popper's first world, idealism to the second and anti-positivistic action research to the third. Design science and action research do not necessarily have to share ontological and epistemological bases. Three potential roles for application within information systems were identified: means-end oriented, interpretive and critical approaches. In terms of design science ethics, Niiniluoto comments on taxonomy as a descriptivistic endeavour:
“Design science research itself implies an ethical change from describing and explaining the state of the existing world to shaping and changing it.” [73]
Iivari considered the philosophy of design science research itself:
“Regarding epistemology of design science, artifacts of taxonomies without underlying axioms or theories do not have an intrinsic truth value. It could however be argued that design science is concerned with pragmatism as a philosophical orientation attempting to bridge science and practical action.” [74]
The methodological rigour of design science is derived from the effective use of prior research (i.e. existing knowledge). Major sources of ideas originate from practical problems, existing artifacts, analogy, metaphor and theory [75].
Following on from Plato and Aristotle's notion of essentialism - a characteristic essence of every entity, concept and material [76] - the epistemology of design science as evinced by taxonomy development until the Industrial Revolution was at least partially informed by a naïve, pre-Darwinian essentialist sensibility. There is a lack of agreement as to the extent to which early classifiers such as Linnaeus and Haeckel were complete essentialists who fully believed that the biosphere was composed of static, time-independent ensembles of living things. Iivari makes a post-essentialistic statement as follows, highlighting the value of abstract or conceptual approaches as possible intermediaries in the ontological quest:
“Conceptual knowledge does not have an intrinsic truth value, but is a relevant input for the development of theories representing forms of descriptive knowledge, which may have a truth value.” [74]
The scholarly pastime of classifying objects systematically is usually traced back to Carl Linnaeus, the Swedish botanist, physician and zoologist active in the 18th century, who prepared a number of thorough approaches to classifying living things, including the formalised binomial nomenclature still in use for the naming of species [77, 78]. Prior to this, Aristotle's Predicamenta laid the conceptual foundations for the activity of categorising concepts and objects [79]. These were simple inductive approaches which began with conceptual or empirical inquiry and led in some successful cases to axiomatic reasoning; given the paucity of reliable and objective information available at the time, it is entirely understandable that thorough and concise frameworks did not develop immediately. The oil and coal product trees of the 19th century, as shown in Figure 2, were inspired by this lineage [80, 81]. As the Industrial Revolution was carbon-driven, complex organic materials composed of myriad constituents principally derived from coal and oil were purified, refined and processed into a new generation of high-performance products. Fortunes were made and lost on classification accuracy, with fractionation and purification of carbon feedstocks yielding a plethora of fuels, dyes, lubricants and other organic compounds previously undiscovered or unobtainable by chemical synthesis. These classification approaches were still informed by the natural sciences' categorical branching-hierarchy paradigm and would be thought of as cladistic taxonomies, as the interrelation of objects is directly associated with their incidence, derivation and provenance.
Hertzsprung and Russell employed empirical data from astronomical surveys in the early 20th century and found that stars could be grouped into families based on their surface temperature and luminosity, affording insight into their probable future fates. By studying the evolution of thermodynamic, nucleosynthetic and photophysical characteristics of stellar objects through these clusters, the Hertzsprung-Russell phenetic taxonomy has over time been refined, simplified and developed into a highly successful visual classification mechanism, with an example shown in Figure 3 [83, 84].
The periodic table of the chemical elements as it exists in the present day is an evolution of taxonomic approaches initially developed phenomenologically, then refined with increasingly meaningful heuristics as scientific knowledge developed from the 17th century to the present. Newton studied elemental properties under the auspices of alchemy when, as Master of the British Royal Mint, he ostensibly attempted to systematise approaches to reverse-engineer gold, at that time the most unforgeably scarce material known [85]. Geoffroy later developed symbolic matrices empirically studying affinities between materials [86]. Lavoisier and Priestley are credited with the discovery of elemental oxygen and its role in combustion, invalidating the incumbent phlogiston theory and the long-lived mythic notion of an element of fire [87, 88]. Döbereiner attempted to group materials based on elemental mass triads in the 1820s, after Dalton's work lent credence to Democritus' atomic theory, as depicted in Figure 4 [89]. In the mid-19th century, theories developed to bridge the gap between empirical pattern-finding and axiomatic classification of elements on the basis of atomic mass and number, with varying approaches developed in isolation by de Chancourtois, Newlands, Meyer and Moseley [90, 91, 92, 93].
Elemental taxonomy progress is well documented and widely disseminated, as each physical instantiation of the periodic table constitutes a snapshot in time: heavy elements continue to be discovered, and advances in scientific theory further the progress from empirical to axiomatic bases for this taxonomic approach. As with Kekulé's elucidation of the aromatic cyclic structure of the benzene molecule, Mendeleev was thought to have made the necessary deductive leaps in an ouroborosian dreamtime reverie, perceiving a rotary concept to be the key inventive step towards a unified chemical ontology [95, 96].
In the late 19th and early 20th centuries, linear (Figure 5) and cyclical (Figure 6) schemas both developed as the classification discriminant improved from the ranking of elemental oxides to atomic number and outer-shell electron configuration as determined by permutations of quantum numbers. Circular designs such as Soddy's have greater conceptual and explanatory power than linear ones by dispensing with the need to choose a position for the empty-shelled noble gases [97]. Moving across the table, outer electron orbitals are populated according to thermodynamic principles, with quantum mechanical orbital theory providing the geometries and energetic characteristics of the s, p, d and f-type orbital probability density functions (PDFs) [98].
In the present day, elemental discoveries continue and systematised classification frameworks exist to explain, predict, observe and categorise. Periodic tables constitute mature taxonomy approaches predominantly employing axiomatic reasoning and empirical validation. Indeed this taxonomy format has itself become a memetic simulacrum, representing and signalling the triumph of scientific traditions, though critical debate as to its absolute veracity continues [100, 101, 102]. The symbolic meaning of form may surpass that of contents as “periodic tables” of unrelated objects proliferate, with perhaps the most egregious misuse of this term and of taxonomy itself to date discussed below in Section 2.4. As with all information systems, the principle of GIGO (Garbage In, Garbage Out) applies [103].
In contrast to the above historical successes of taxonomy, classification approaches to legacy and cryptographic assets have been rather limited in scope and depth to date. Transnational banking institutions such as the \textit{Bank for International Settlements} (BIS) and the \textit{International Monetary Fund} (IMF), as well as credible commentators, have largely yet to progress beyond somewhat naïve classification methods, which provide little explanatory power or exhaustiveness of classification [107, 108]. In the current era of regulatory inconsistency and opacity, a more logical and robust conceptual framework using more considered classification approaches would allow existing taxonomic, scoring or rating philosophies to be integrated into a more versatile whole.
Brave New Coin and CryptoCompare are the sources of the most thorough characterisations of cryptographic assets to date, building upon some of the categorisations and nomenclature employed by Greer, Burniske and Tatar [26, 110], with illustrations in Figures 7 and 8. In the Burniske / Tatar approach, cryptoassets are considered sufficiently classified by a small set of binomial categorisations: “value-protocol use or not”, “direct value or not” and “functional value or monetary”. Some unique characteristics of the various attributes were identified, but ultimately this framework largely fails to withstand scrutiny against the Nickerson et al. requirements of a valid taxonomy: it lacks exhaustiveness, possesses multiple overlapping attributes, and is descriptive rather than explanatory (see Sections 2.1, 2.2 and 3.2) [66]. Some of the classification approaches contained within both studies would map reasonably well onto the Nickerson et al. construction of a taxonomy problem statement and canonical requirements (see Section 3.1), though they would be better described as databases or information repositories of relevant metrics and attributes.
Other self-proclaimed cryptoasset classification attempts fare less well when judged by these criteria. Of particular note is the “taxonomy” produced under the auspices of a “periodic table of cryptocurrencies”, which rather resembles an arbitrary scatterplot with a polynomial fitting line “connecting” the axes of risk and reward (Figure 9). Such a simple intuitive “clustering” of cryptographic asset types - if indeed the “data” presented is genuine and has legitimately been examined using pseudo-taxonomic approaches - appears to be entirely devoid of explanatory power [106]. A number of largely trivial Venn-type sorting approaches have been attempted by Bech & Garratt and others for both public and private legacy moneys. For example, the so-called BIS money flower [108, 109] fails as a useful taxonomy in the Nickerson sense, being inexhaustive, possessing multiple overlapping attributes, and being primarily descriptive rather than explanatory.
Linear, hierarchical and uni / bivariate sorting approaches including lists, scoring systems and typologies have also been employed, with examples such as the SpacesuitX ICO scoring system and Swiss regulator FINMA's one-dimensional “utility, payment, asset, hybrid” delineation [111]. CryptoCompare commented in October 2018 that by FINMA's rationale, over 50% of major cryptographic assets would be classified as securities under Swiss law [105]. In addition to many informal self-proclaimed taxonomies, the term does seem to be used loosely enough to be applied to simple lists of phenomena or objects, or even a polynomial fitting function overlaid with some creative license upon ostensibly arbitrarily placed data [106].
The most complete example to date of a strictly valid cryptographic asset taxonomy comes from a recent conference proceedings article by Fridgen et al. entitled \textit{Don't Slip on the ICO}, which employed the Nickerson methodology to arrive at a reasonably useful taxonomy, with cluster analysis performed on a dataset derived from desktop research and expert judgement. Figures 11 and 12 contain key findings from this publication [112]. The work was presented at an EJIS conference in 2018, suggesting that the information systems research domain is leading the way in the robust application of taxonomy design to cryptographic assets, rather than such attempts arising from within the nascent cryptocurrency research community itself. Worthy mentions also go to Glaser et al. and Tasca et al. for producing meaningful classificatory work in the related areas of decentralised consensus mechanisms and blockchain technologies respectively [113, 114].