On Big Data (small data) and (trans)national sources
Komplexitätssteigerung (raising complexity) in the hermeneutics of Big Data
A manifesto for the small in Big Data and Digital Humanities – in questions (mainly)
Why do data have to be Big?
What is the benefit of being big?
And when does big become big?
Where does big data start?
Can small data be beautiful?
How to get the best out of both worlds – big and small?
Who owns Big Data (intellectually)?
Is “cleaning” data an act of interpretation and hermeneutics?
Would it make sense to replace “big” with “complex”?
Is big data an elegant way of shying away from complexity?
Is big data flat – losing edges and complexities?
How do we bring our strengths as historians, trained as close readers, into big data (kritische Haltung – a critical stance)?
Where does the competence of historians lie – the same competences, but carried from the analogue into the digital?
You have the data – but we have the questions, ok?
Please let us in – or are we (historians) just the guys who annoy because we ask questions and seek Komplexitätssteigerung (greater complexity)?
A problem of sources in transnational histories
A large share of our sources is bound to national institutions (archives, national libraries). Could the maps and visualisations we have in mind allow us to de-centre or de-nationalise the nature of these sources? A good example is the newspaper collection of the Austrian National Library in Vienna. Yet as long as such collections remain bound to a single state (mainly Habsburg) or a single language, we cannot de-centre or de-nationalise the objects of our studies. The visualisation and mapping of events along such newspapers would only reinforce the existing dominance of national frameworks.
Anna Annieva, Bernhard Struck, Martin Stark, Stefan Nygard