visual summary:
A year ago, I left my job at a genetic engineering startup to attempt independent research. I wanted to forsake traditional metrics of success in science institutions and carve a path that allowed me to live in my values and pursue intrinsic curiosities.
Despite proclaiming myself independent, I very quickly realized that I need help, feedback, and community. So I started a small Slack group of trusted peers and began posting little bits of my research plans and process. My experience significantly improved (and I started using the term extramural researcher, rather than independent). Having a low-stakes audience to speak to, even if they only passively listened, motivated me; I got help troubleshooting experiments; I was sent high-quality resources to refer to or read; I felt less lonely and more inspired.
In doing this, I realized the original purpose of science publishing: getting help and feedback and finding community, which was so starkly simple in comparison to the motivations, layers upon layers of bureaucracy, expenses and corruption in current publishing systems.
I became curious whether other researchers felt the same. After leading a few discussions on variations of the theme "First Principles of Science Publishing", I expanded my list of the basic needs (of a researcher) that publishing fulfills (ordered from intrinsic to extrinsic):
to share knowledge (and change the world)
to find and be in community
to get help and feedback
to feel complete and celebrate
to be seen and acknowledged
to receive the resources needed to continue to do research
No single tool succeeds at fulfilling all these needs. We must stitch together patches of each tool to create a satisfactory quilt that keeps our science dialogue cozy.
I've distilled these six specific needs into three general dimensions that tools can address:
How easy is it to access knowledge?
How easy is it to share knowledge?
Does it enable a rigorous discourse environment?
I visualize the answers on a map like this:
I've applied some hand-wavey metrics, but these should be seen as anecdotal, n-of-1 data. These are just examples of why a tool might qualify for a score, but it's not hard and fast; this is a lossy attempt to quantify a subjective assessment.
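For concreteness, the hand-wavey scoring can be sketched as a tiny rubric. This is just a sketch of how I tally points per dimension; the feature names and point values below are my own illustrative choices, not a formal spec.

```python
# A minimal sketch of the hand-wavey scoring rubric used later in this piece.
# Feature names and point values are illustrative, not a formal spec.

# Points a tool earns (or loses) per feature, grouped by dimension.
RUBRIC = {
    "access": {"lots of info to access": 2, "free to access": 3,
               "expensive subscription": -1},
    "sharing": {"free to publish": 2, "expensive to publish": -1,
                "preprints": 1, "micropubs": 2, "flexible pub type": 2,
                "reach": 1},
    "discourse": {"DOIs": 1, "commenting": 1, "peer review": 2,
                  "discourse units": 1},
}

def score(tool_features: dict[str, list[str]]) -> dict[str, int]:
    """Sum the rubric points a tool earns on each dimension."""
    return {dim: sum(RUBRIC[dim].get(f, 0) for f in feats)
            for dim, feats in tool_features.items()}

# e.g. a hypothetical paywalled journal:
journal = score({
    "access": ["lots of info to access", "expensive subscription"],
    "sharing": ["reach"],
    "discourse": ["DOIs", "peer review"],
})
```

The point here is only that each dimension's total is a simple, lossy sum of subjective judgments, which is why two tools with the same total can feel very different in practice.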
As scientists continue to rail against peer review and paywalls, and as DeSci, extramural researchers, and metascientists continue to emerge, I think iterations on our science publishing tools might benefit greatly if we contemplate such questions.
Below, you'll find some deeper characterizations of the tools and strategies on the above axes.
You can skip to the list if you don't feel the need for more context.
Accessibility is straightforward. The ideally accessible publishing tool would enable free reading of a lot of easily discoverable information. Tools fall short on accessibility with subscription costs, affiliation-based access, or simply a small amount of accessible content. The White House Office of Science and Technology Policy recently announced new guidance to make publications and research funded by taxpayers publicly accessible, without any cost, which could be a huge step towards accessibility.
Similar to accessibility, a publishing tool that excels in shareability would enable freely publishing to a large audience. Highly respected journals provide extensive reach to a publication, but falter when it comes to cost, often charging thousands of dollars to submit a paper for review. In the current paradigm of science publishing, authors ultimately pay to ensure reach and pay to claim their work. For better or for worse, innovations like preprints and micropublications take strides towards improving both shareability and claiming.
Preprints
Traditional journals also stumble on time costs; publishing can take months to go from submission, through review, then revisions, and finally to publication. In an effort to combat time costs, some traditional journals have turned towards preprints, which operate in a publish-first, review-later paradigm. This does speed up shareability, but ultimately, I find preprints to be a band-aid on a bullet hole.
Preprints attempt to incentivize researchers to share their work faster. However, there are so many other technical and cultural innovations that hold higher potential for enabling researchers to share data faster, for example: legitimizing micropublications, contribution mapping that is robust, specific and nuanced, and self publishing.
Preprints are probably mostly used to claim units of science faster.
Publishing to Claim
Claiming science is an undeniable incentive for researchers to share their work. I do not mention it on the list of basic purposes of publishing because I find it to be pervasively imposed on scientists, and not aligned with our intrinsic motivators. Publishing to claim is a useful strategy to advance a scientist's career, or a lab's status, or assert authority in a field, but this tends to be damaging towards accessibility and progression of science. Other folks have eloquently written about this:
"Scientists are incentivized to, and often do, withhold as much information as possible about their innovations in their publications to maintain a monopoly over future innovations." - Milan Cvitkovic ³
They "...have cited several systemic and technical blockers that prevent them from sharing their research data. The most common concern lies with the fear of losing intellectual ownership and suffering a lack of attribution for work. Why share the expensive data that I have painstakingly collected for others to publish before I can?" - Kinshuk Kashyap ⁴
and so, "...the only way in which a discovery in science can be attributed to the scientists — and hence become property of the scientist — is by publicly making the finding available... ‘making it yours by giving it away’ ” - How Economics Shapes Science, Paula Stephan ⁵
Scientists are incentivized to claim a research output rather than share it. We need publishing systems that enable researchers to get nuanced credit attribution for sharing all the parts of a research output: raw data, variations on data interpretation, negative results, storytelling, pattern matching across datasets, research proposals, etc.
Micropublications
Micropublications are a speculative innovation that I have high hopes for. Rather than spending years gathering enough results and narration for a traditional science publication, imagine if scientists could publish single, small units of research, immediately, without fear of losing credit. For example, I could have the brilliant idea of producing paper from yeast (and publish this idea as a micropub), Ned from his mother's basement in New Jersey could do the math of comparing atmospheric carbon sequestration of old forests versus newly replanted trees (and link it to my pub), Amanda from University of Toronto could identify the necessary metabolic pathways for paper-like fibers and cellulose production (and publish this), Jay from the Joint BioEnergy Institute in Berkeley could do the molecular cloning work (and publish those DNA designs as a micropub), Iman could transform that DNA from Jay into yeast, and Jamie from the Mass Spec facility could do the metabolite analysis (and publish that data as a micropub). There are a lot more possibilities I (and others) imagine for micropubs, including gratitude grams, bounties, micro profits from patents, knowledge graphing, and more, but I will attempt to stay high level here:
Micropubs could foster collaboration rather than competition: because contributions are documented, researchers need not fear scooping; each contribution would be credited in a macropub, or living review, which could simply be a narrative threading of micropubs.
I also think a micropub based publishing paradigm would encourage negative results to be published because each publication is significantly lower stakes. Hypothetically, an experienced scientist might not need or want to see all the negative results of a study, she could filter all negative-result-micropubs and just view the successful data-building bits. However, an early career scientist trying to replicate a study would massively benefit from being able to access the negative results. This could also reduce unnecessary repetition of experiments.
I also imagine micropubs could create an appreciated niche for scientists who scan through many, many micropubs across fields and pull out patterns or trends, then publish seminal macropubs that string together those diverse micropubs. For example, Lynn Margulis (née Sagan) is now primarily credited and applauded for the theory of endosymbiosis, but at the time of publishing her pivotal work, "On the Origin of Mitosing Cells", she was largely berated and criticized for co-opting data and arguments from many other researchers, then taking sole credit for threading together their disparate work. I think a publishing paradigm based on micropubs would not only allow easier pattern finding for scientists like Margulis (because data is not buried in, and justified only by, narratives), but also allow for detailed credit attribution.
Most exciting for me personally, is that a micropub paradigm could create a means for extramural scientists to get peer review, feedback, community, collaboration, and maybe funding. I hope that micropub-based platforms outsource some of the work of a PI and institutions.
A discourse environment is the ecosystem in which researchers get help, feedback, and community. A high-quality discourse environment might have fast help, quick turnarounds of feedback, depth in troubleshooting, and ease of iteration. A discourse environment sets up a scientist's creative context and helps maintain it through the turbulence of doing research. A discourse environment is not only people offering support in troubleshooting, but also: a citation system that refers you to another study, your Slack channel called "Papers", weekly lab meetings, Twitter, Reddit, Stack Overflow; anything that contributes to your scientific dialogue.
Archival Status and DOIs
Citations, and therefore DOIs, are currently used to track which publications contributed to a given publication. I consider this to be a form of discourse environment similar to how knowledge graphs or discussion forums are other forms of discourse environments.
Archival status and DOIs are always brought up as a needed aspect of any science publishing system. The actual value of a DOI is that the article is stamped as forever searchable and citable. Indefinite searchability, version control, and citable version history are undoubtedly valuable to the discourse environment. Yet despite enabling these useful features, DOIs have perpetuated papers as stagnant, unchanging, and incredibly high-stakes entities. In my exploration of new publishing initiatives, I've found the question "does it support DOIs?" can typically be heard as "does it have this superficial stamp of legitimacy?" DOIs are both the default and the lowest-effort attempt at creating a rich discourse environment. We need better tools to capture the value of version control, referencing, and archival status, while still enabling dynamicity.
Peer Review
Peer review was originally established as a means to give feedback, choose which papers got printed, and ensure those papers were accurate and high quality. It was a filter function. The Royal Society (which invented Peer Review) only wanted to print the best, most relevant papers to distribute. This need was amplified in the twentieth century when significantly more scientists and publications entered the scene, enabled by improved technology like the typewriter and photocopiers¹ ². Since this initial intention, peer review has become a stamp of legitimacy that is too often equated with truth, and the process itself has become corrupt in more ways than one. We are no longer limited or beholden to a centrally controlled journal that chooses (and prints) what we have access to: we have the internet. We need a better filter function.
Commenting
Commenting goes a long way in facilitating dialogue in that it is dialogue, whereas most science publications are formatted to exist only as impervious PDFs. However, comments and threads can quickly devolve into low quality or irrelevant spam. Many forum and dialogue sites have adopted peer review adjacent systems like up/downvoting for posts and comments. In a sense, traditional peer review is just soliciting diligent comments from a few reputable experts. I think a distributed peer review system that enriches for deep, technical, and rigorous comments might be the future of science publishing, but most commenting lacks a robust filter function. On this list, LessWrong has a particularly good toolkit (or culture?) for commenting.
Public/Private Environments
Researchers typically don't want to go from a completely private thought environment to a completely public environment. While private environments might initially seem contrary to the hope of sharing knowledge, I think tools that enable seamless transitions from private thinking space to public reading space actually enable a lot more sharing than distinctly low-stakes private environments or distinctly high-stakes public environments. PubPub (and institutional blogs like Arcadia's) are great examples of the benefit of private-to-public publishing.
Discourse Units
Discourse Units are a speculative innovation that go hand in hand with micropublications. A unit of discourse might be a question, piece of evidence, hypothesis, result, or protocol. Proponents of discourse units hope that they will enable more rigorous online dialogue because every "hypothesis" could be backed up with units of "evidence", and competing hypotheses could be evaluated with greater depth. On this list, only the nascent journal Octopus and forums like StackOverflow and Research Hub use discourse units.
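To make the idea concrete, here is a minimal sketch of how typed discourse units might reference one another, with a hypothesis backed by evidence units as described above. The unit types, field names, and linking scheme are my own invented illustration, not the real data model of Octopus or any other platform.

```python
from dataclasses import dataclass, field

# A made-up, minimal model of typed discourse units.
# Unit kinds and the linking scheme are illustrative, not any platform's real API.

@dataclass
class Unit:
    kind: str      # e.g. "question", "hypothesis", "evidence", "protocol"
    author: str
    text: str
    supports: list["Unit"] = field(default_factory=list)  # units this one backs up

def evidence_for(hypothesis: Unit, corpus: list[Unit]) -> list[Unit]:
    """All evidence units in the corpus that link to the given hypothesis."""
    return [u for u in corpus if u.kind == "evidence" and hypothesis in u.supports]

# Example: competing hypotheses could be compared by how much evidence backs each.
h1 = Unit("hypothesis", "ada", "Yeast can be engineered to produce paper-like fibers")
e1 = Unit("evidence", "jay", "Cloned a cellulose-synthesis pathway into yeast", supports=[h1])
e2 = Unit("evidence", "jamie", "Mass spec confirms cellulose precursors in lysate", supports=[h1])
corpus = [h1, e1, e2]
```

Even this toy version shows the appeal: once units are typed and linked, "how well supported is this hypothesis?" becomes a query rather than a close reading of a narrative PDF.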
The list of (science) publishing tools and initiatives.
This list is not your canonical set of science publishers; I include only a few traditional publishing tools to compare alongside platforms that I personally see my researcher friends using, like Twitter, Hacker News, Sci-Hub, nascent journals, their own personal websites, and others.
You'll find a self-descriptor of each tool, a summary of my (sometimes pithy) assessment, and how the tool scored on each dimension.
"Nature is the foremost international weekly scientific journal in the world ... It publishes the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature publishes landmark papers, award winning news, leading comment and expert opinion on important, topical scientific news and events that enable readers to share the latest discoveries in science and evolve the discussion amongst the global scientific community. "
Nature is one of the most highly respected, traditional, scientific journals. It is the gold standard that innovators in publishing tools and formats will strive to surpass. Accessing knowledge hosted by Nature requires a $29.99/month subscription to online papers from 2017 onwards, or $199/year for printed journals (51 issues) and online access to papers from 1998 onwards. There also seem to be licensing deals for governments, universities, and corporations, but pricing details are not transparent.
Nature's reputation enables significant reach despite paywalls and extensive formatting requirements. Authors can pay an Article Processing Charge of €9,500 to publish their work as Gold Open Access, which makes the article free to read¹.
As far as discourse environments go, Nature enables just the minimum: DOIs for archival status and papers that refer to other papers.
+2 lots of info to access, -1 expensive subscription: Access Total = 1
-1 expensive to publish, +1 free to publish but requires institutional affiliation, +3 reach: Sharing Total = 3
+1 DOIs, +2 peer review: Discourse Total = 3
"Sci-Hub is the most controversial project in modern science. The goal of Sci-Hub is to provide free and unrestricted access to all scientific knowledge."
"a shadow library website that provides free access to millions of research papers and books, without regard to copyright, by bypassing publishers' paywalls in various ways."
Sci-Hub is the Robin Hood of science publishing. It is the most effective science publishing tool at making knowledge accessible. The Empress of Accessibility enables free access to over 88,000,000 research articles and books. Its existence lets all other tools off the hook for innovating on that aspect. It fulfills one need and nothing else. Sci-Hub may be the most beloved science publishing tool because it fulfills this one basic purpose so distinctly well. Authors must publish in a journal elsewhere; there is no publish-straight-to-Sci-Hub pipeline, and it operates off of DOIs, which might be the best justification for perpetuating the use of DOIs.
+2 lots of info to access, +3 free to access: Access Total = 5
Sharing Total = 0
Discourse Total = 0
"arXiv is a free distribution service and an open-access archive for 2,079,801 scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics. Materials on this site are not peer-reviewed by arXiv."
Preprints, open access, and archival status: that's it. To publish, you must register as an author and submit a paper. All authors are required to have an institutional affiliation or an endorsement to prove they belong to the scientific community. There is also a moderation process to ensure scientific professionalism. arXiv is seemingly limited to institutional researchers. It dispenses DOIs but does not peer review the submitted papers.
+2 lots of info to access, +3 free to access: Access Total = 5
+1 preprints, +1 free to publish but requires institutional affiliation, +1 reach: Sharing Total = 3
+1 DOIs: Discourse Total = 1
"As part of eLife’s transition to a new 'publish first, then peer review' model, which we announced in December of last year, eLife is now only peer reviewing in depth articles that have been made available as a preprint. Our new editorial process combines the immediacy and openness of preprints with the scrutiny and curation of peer review to create a 'refereed preprint'."
eLife only peer reviews preprints in an attempt to incentivize researchers to share their research faster. As I mentioned earlier, I think preprints do not solve the scientists-are-slow-to-share problem and actually propagate the publish-to-claim problem. eLife's reviewer notes are publicly available to read, which does enrich the science dialogue.
How easy is it to Access Knowledge? — free to read, with a large number of papers and preprints available
+2 lots of info to access, +3 free to access: Access Total = 5
How easy is it to Share Knowledge? — $3,000 publication fee; if invoiced to an individual rather than a business, VAT of 20% is added. Unlike other pricey journals, eLife champions preprints, which earns them a point towards shareability.
-1 expensive to publish, +1 preprints, +1 reach: Sharing Total = 1
What kind of Discourse Environment is enabled? — supports DOIs; reviewer notes and an editor's evaluation of each paper are made publicly available. Filters for preprinted publications, which are later peer reviewed and then editorially curated.
+1 DOIs, +2 peer review: Discourse Total = 3
"PLOS is a nonprofit, Open Access publisher empowering researchers to accelerate progress in science and medicine by leading a transformation in research communication ... We propelled the movement for OA alternatives to subscription journals. We established the first multi-disciplinary publication inclusive of all research regardless of novelty or impact. And we demonstrated the importance of open data availability."
PLOS is the original open access journal and deserves applause for leading the open access publishing movement. With that said, publishing in PLOS can cost $800-$5,300, depending on the specific PLOS journal. PLOS does go to significant lengths to accommodate authors from lower-income countries: authors from some countries are not charged a publication fee, while those from others are charged $500 and may apply for PLOS Publication Fee Assistance (some field-specific journals are free for these countries). The PLOS discourse environment is mostly just peer-reviewed papers citing other papers, but they do have online commenting set up at the bottom of each pub, though the commenting capability doesn't seem to be widely used.
+2 lots of info to access, +3 free to access: Access Total = 5
-1 expensive to publish, +1 reach: Sharing Total = 0
+1 DOIs, +1 commenting, +2 peer review: Discourse Total = 4
"Twitter is an open service that’s home to a world of diverse people, perspectives, ideas and information. We’re committed to protecting the health of the public conversation — and we take that commitment seriously."
Twitter is tricky to assess. It is not intended to be a tool for scientific publishing, yet it provides undeniable value to many researchers, which is why I find it worthy of a spot on this list. It often serves as a dumping ground for all the feelings and experiences that are suppressed in traditional publishing environments, which I think, offers valuable insight.
On one hand, Twitter enables so much discourse and connection between researchers in disparate walks of life (academia, industry, extramural) and allows them to engage outside of their local bubble. On the other hand, the actual tools for the discourse environment are quite lacking: limited to 280 characters and horrible threading and replying UIs. Commenting enables fast feedback and dialogue, user profiles enable engagement, it fosters community and scenius. Twitter has a very minimal (peer review adjacent) quality control and filter function for science dialogue, which is simply who you follow.
Similarly, it is incredibly easy to share small bits of knowledge or links, but achieving significant reach requires investment. So much of my science community uses Twitter to discover new papers, people, and even collaborators, so it warrants respect for enabling access.
Twitter is more of a thread, rather than a patch, in the quilt of our science dialogue: it connects many tools to many people, but alone would mostly be a tangled knot. You can see the scores below, but I keep the diagram fuzzy because Twitter is too much of an "it-is-what-you-make-it" tool.
+2 lots of info to access, +3 free to access: Access Total = 5
+1 micropubs, +2 free to publish, +1 reach: Sharing Total = 4
+1 commenting, +1 public/private environments, +3 rigorous dialogue, -1 no archival status/just dialogue: Discourse Total = 4
"Designed to replace journals and papers as the place to establish priority and record your work in full detail, Octopus is free to use and publishes all kinds of scientific work, whether it is a hypothesis, a method, data, an analysis or a peer review."
In an attempt to enrich online discourse environments, Octopus (in beta) stands tall on two innovations: micropublications and discourse units. Micropublications aim to foster more collaboration, faster publication, and unbundle all the parts of a traditional publication: Problem, Hypothesis, Protocol, Result, Analysis, Interpretation, Application, and Peer Review. These parts can be termed Discourse Units. Discourse Units may prove to enrich online dialogue by classifying discrete units of science then linking such units together to demonstrate robust evidence for a claim, pit opposing hypotheses against each other, and so on. Micropubs and discourse units may also allow for more collaboration among disparate science communities, as they enable more robust contribution tracking. With that said, I find it disheartening that Octopus states the value of micropubs as establishing priority, which caters to a corrupted framework for science publishing.
Octopus is free to read; it used to host a few publications, but currently seems to have only demo pubs. It is also free to publish, but requires a (free) ORCID iD. Octopus plans to enable DOIs, but doesn't currently, and it publicly solicits a form of peer review: rating each pub on "Well annotated", "Followed Protocol", and "Size of Dataset" on a five-star basis.
+3 free to access: Access Total = 3
+2 free to publish, +2 flexible pub type: Sharing Total = 4
+1 commenting, +1 peer review, +1 filter function, +1 discourse units, -1 no archival status (yet): Discourse Total = 3
"microPublication.org publishes brief, novel findings, negative and/or reproduced results, and results which may lack a broader scientific narrative. Each article is peer reviewed and assigned a DOI."
microPublication.org is a traditional scientific journal in every way except that each pub is a small unit of research: a single dataset with very little narrative. They champion and legitimize micropublications as a formidable innovation in publishing. However, despite winning points on ease of sharing knowledge through micropubs, the cost to publish and the requirement for institutional affiliation negate their advances in shareability. It costs $250 to publish an article, and publications require approval by a PI or funding body (aka institutional affiliation). microPublication.org doesn't innovate on discourse environments, enabling just the defaults of DOIs and peer review.
See my thoughts on micropublications for more on this.
+1 lots of info to access, +3 free to access: Access Total = 4
-1 expensive to publish, +2 micropubs: Sharing Total = 1
+1 DOIs, +2 peer review: Discourse Total = 3
"PubPub is an open-source, hosted, free-to-use content management system designed to help knowledge communities of all types collaboratively create and share knowledge online."
PubPub is essentially a blog hosting platform built for scholarly-bent writers. It has an easy, intuitive, and aesthetic writer and reader experience and an advanced toolset for formatting. PubPub allows anyone to create a journal or personal website. I probably should be using PubPub for this site.
It is free to read, with a growing number of papers and articles to browse. It is free to publish, but has limited reach. PubPub supports public and private commenting, in-line commenting, and DOIs; it fosters topical communities and claims to support peer review processes, though it's unclear how. It mostly hosts user-curated community sites (journals). There is a limit of 10 free DOI registrations per community per year, then $1 per DOI via Crossref membership.
+1 lots of info to access, +3 free to access: Access Total = 4
+2 flexible pub type, +2 free to publish: Sharing Total = 4
+1 DOIs, +1 commenting, +1 public/private environments, +1 peer review: Discourse Total = 4
"We are reimagining scientific publishing — sharing our work early and often, maximizing utility and reusability, and improving our science on the basis of public feedback."
Arcadia's experiment is unique in that they have built an institution that is committed to open science and believe that building a new science publishing system will better serve their pursuits than trying to change existing systems. Rather than submitting publications to an existing journal, Arcadia has created their own discourse environment that has an easy and aesthetic (though too often redundant) reader experience; publications persistently use first-person active voice, include detailed protocols, and support in-line public commenting. They have no peer review process, but rather an editorial staff. The Arcadians have vocalized (and acted on) an intention to publish rigorous, small units of science, and to do so quickly. The entire environment is on the platform PubPub. Each of Arcadia's "Pubs" is nested in the context of a "Project", which hopefully enables small pubs to elicit faster feedback and knowledge dissemination than a slower traditional publication. Arcadia essentially created a blog for their institution, but presented it with the fanfare and flair of a new, open science publishing experiment, which is great!
Arcadian articles are free to access, publishing is limited to Arcadian authors, PubPub (the supporting platform) is free for anyone to use, and anyone with a (free) PubPub account can comment.
Each pub has a DOI, and is open for public commenting: this is pretty awesome. The author of this pub, Peter S. Thuy-Boun, is having active conversations with readers. This is so starkly different from the static, status quo scientific paper. One step toward dynamicity! There is no peer review, but all pubs are from within Arcadia, and editorially curated by internal Arcadia staff.
+3 free to access: Access Total = 3
+2 flexible pub type, +1 free to publish, but requires Arcadian affiliation: Sharing Total = 3
+1 DOIs, +1 commenting, +1 public/private environments, +1 discourse units: Discourse Total = 4
"a journal and community dedicated to nurturing promising ideas ... and helping them blossom into scientific innovation. We have one primary criterion: does your article contain original ideas that have the potential to advance science? Peer review is conducted through voting and commenting by our diverse network of reviewers..."
Seeds of Science is a nascent journal that explicitly values speculative ideas and nontraditional science. Publications are called "Seeds" and are meant to be short (< 2,500 words) idea-driven writing. Significant creative license is encouraged. SoS is not really innovating on new tooling, rather it is a journal, that supports DOIs and is fiercely value driven. Specifically, SoS asks, "Can people outside of traditional academic science (or at the lower levels of it) make valuable contributions if given the proper platform and support?" and I think this question vocalizes values I would personally like to support, despite a lack of deeper technical innovation.
SoS is free to read, but currently hosts only a small number of articles. It is free to publish and does not have significant reach, but gains shareability points because it has none of the usual formatting or length expectations of a traditional scientific paper. SoS enables the most basic discourse environment: DOIs and a peer review process:
A solicited group of reviewers (aka Gardeners) vote yes or no on a submitted pub; both voting and commenting are voluntary. If a majority of reviewers vote yes, the pub is "almost certain" to be published; if around 50% vote yes, the strength of reviewer comments is subjectively assessed and a decision is made, supposedly by the editorial team.
It is great to have value-driven journals, especially when the values cater to noncanonical research questions. However, SoS defaults to stagnant PDFs and does very little to capture the power of the web. Nature at least has the excuse of being founded in 1869; I would hope that up-and-coming journals use the tools of the time a little more.
+3 free to access: Access Total = 3
+2 flexible pub type, +2 free to publish: Sharing Total = 4
+1 DOIs, +1 peer review: Discourse Total = 2
"The web is a powerful medium to share new ways of thinking. Over the last few years we’ve seen many imaginative examples of such work. But traditional academic publishing remains focused on the PDF, which prevents this sort of communication."
Distill was a journal that focused on publishing clear, dynamic, and vivid machine learning publications. They aimed to combat the status quo of static publications. Each publication is visually stunning, with expertly crafted interactive demos and reactive diagrams. After a four-year-long experiment in creating this journal, the distill.pub team concluded that the future of scientific knowledge dissemination will (in most cases) be self-publishing. I highly recommend reading the Distill Hiatus post, which explains this conclusion.
Distill was free to read and free to publish. However, each pub apparently took such a significant amount of editorial work to get up to Distill quality that editors eventually deserved authorship, and this became a conflict of interest. Despite championing interactive figures and harnessing the power of the web, Distill didn't seek to build extravagant dialogue tools; it issued DOIs and hosted peer review (and made the reviews public). Each pub did, however, have its own GitHub repository that readers could submit issues to.
+1 lots of info to access, +3 free to access: Access Total = 4
+2 free to publish: Sharing Total = 2
+1 DOIs, +1 commenting, +2 peer review: Discourse Total = 4
"ResearchHub's mission is to accelerate the pace of scientific research. Our goal is to make a modern mobile and web application where people can collaborate on scientific research in a more efficient way, similar to what GitHub has done for software engineering. Researchers are able to upload articles (preprint or postprint) in PDF form, summarize the findings of the work in an attached wiki, and discuss the findings in a completely open and accessible forum dedicated solely to the relevant article."
I really like the vision ResearchHub hopes to build. Currently, ResearchHub appears to me as an extended version of SciHub (but users upload papers themselves), with more tooling for discourse environments and sharing. These tools include: commenting, text posts (micropubs), user profiles, a reputation system for users, and topical "hubs" that serve as living review papers. ResearchHub has a lot of the features I want in a publishing tool, and I am excited to explore it more.
Posts are free to read, but only papers published under a Creative Commons Attribution License can be fully uploaded. Notably, ResearchHub unexpectedly nearly maxes out all my axes, only faltering in shareability because a full publication must seemingly be published elsewhere. It is also free to post, and there are multiple types of posts: paper uploads, posts, and comments.
The discourse environment includes DOIs for papers published elsewhere, commenting, user profiles, and a reputation system for users that enables a peer-review-like filter function for comments (this is cool!). It also utilizes some minimal discourse units, like "question" vs. "post" vs. "paper". Bounties can be attached to questions and posts. I am also intrigued by ResearchHub's Meta-Study tool, which enables a user to "aggregate a collection of papers that support a particular scientific theory"; see my thoughts on micropublications for more on this.
+2 lots of info to access, +3 free to access: Access Total = 5
+1 preprints, +2 flexible pub type, +2 free to publish: Sharing Total = 5
+1 commenting, +1 reputation filter, +2 rigorous dialogue, +1 discourse units, -1 no archival status/just dialogue forum: Discourse Total = 4
"Stack Overflow helps people find the answers they need, when they need them. We're best known for our public Q&A platform that over 100 million people visit every month to ask questions, learn, and share technical knowledge."
StackOverflow is a question-and-answer forum for computerologists (software engineers, web developers, computer scientists, data analysts, etc.). It is widely used and loved; there is a lot of info to access, and it is free to access and to post questions and answers. Many attempts have been made to create a stack for other fields of research, unfortunately without much traction. The discourse environment is quite extensive, with discourse units, robust commenting, and an up/downvote-based reputation system to enrich the reach and discoverability of good comments. It only loses points in that it is a short-form dialogue tool with no discrete pub type for brand-new knowledge.
+2 lots of info to access, +3 free to access: Access Total = 5
+2 free to publish, +1 reach: Sharing Total = 3
+1 commenting, +1 reputation filter, +2 rigorous dialogue, +1 discourse units, -1 no archival status/just dialogue forum: Discourse Total = 4
"Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity."
I'm including Hacker News on this list because the (extramural) computer scientists I know specifically write for the audience of Hacker News. This is interesting to me because I think Computer Science is the first of the sciences to champion decentralization, and maybe I can learn from the systems that evolved in this championing. My most obvious learning from Hacker News is that it's got an interesting discourse environment. I've seen people scramble to eagerly respond and contribute to the dialogue before commenting closes on a post, and comments tend to be heavy with goodwill and depth of care. It is free and easy to access and share on Hacker News, which is essentially just a feed of links, but actual publications must be hosted elsewhere.
+1 lots of info to access, +3 free to access: Access Total = 4
+2 flexible pub type, +2 free to publish, +1 reach: Sharing Total = 5
+1 commenting, +1 reputation filter, +3 rigorous dialogue, -1 no archival status/just dialogue forum: Discourse Total = 4
I really wanted personal websites to look a lot better on this diagram, but in the end, they usually lack reach and discoverability, can have high time and skill costs and some monetary cost, and the vast majority (including my own) don't have any dialogue capabilities or any peer-review-adjacent filter function. Despite the limitations of the average personal website, there are some researchers who have crafted custom web-homes into beautiful beacons of decentralized science publishing and dialogue, unbeholden to any external publisher.
I am tempted to give personal websites a bonus point towards discourse environments because they can enable so much freedom and creativity in formatting and content, but I won't do that for now.
*A tool like PubPub can be used to make a personal website, and enables a lot more discourse functionality and ease of sharing.
+3 free to access: Access Total = 3
+2 flexible pub type, +1 free to publish: Sharing Total = 3
+1 public/private environment, -1 no archival status: Discourse Total = 0
"We are a community dedicated to improving our reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. More generally, we work to develop and practice the art of human rationality.[1]
To that end, LessWrong is a place to 1) develop and train rationality, and 2) apply one’s rationality to real-world problems.
LessWrong serves these purposes with its library of rationality writings, community discussion forum, open questions research platform, and community page for in-person events."
LessWrong is a community blog for rationalists; if you have as broad a definition of science as I do, it qualifies as a science publishing platform. I haven't personally used LessWrong extensively, but it seems to be a really good one. It has a reputation system, user profiles, robust commenting and threading, and lots of articles; it is free to publish, free to read, has discrete pubs (unlike other forums), and there is a strong force of community behind it.
I am really curious whether LessWrong is particularly good due to its community's cultural norms, or whether the code could be duplicated and populated with a very different community that still reaps the infrastructure's benefits. Apparently, the Progress Forum and the EA Forum have done this, one of which has lively, ongoing discourse. Initially, I was nervous that LessWrong was the thing that maxed out all my axes, but the fact that two other organizations have duplicated the tooling behind LessWrong makes me think that this is, well, not wrong.
+2 lots of info to access, +3 free to access: Access Total = 5
+1 flexible pub type, +2 free to publish, +2 reach: Sharing Total = 5
+1 commenting, +2 reputation filter, +3 rigorous dialogue, -1 no archival status: Discourse Total = 5
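For anyone who wants to compare the tools side by side, the per-axis tallies above reduce to simple arithmetic. Here is a minimal sketch that collects the totals from this post and ranks the tools by combined score (the data structure and ranking are my own illustration, not a feature of any of these platforms):

```python
# Per-axis totals assigned in this post (access, sharing, discourse).
scores = {
    "Seeds of Science": {"access": 3, "sharing": 4, "discourse": 2},
    "Distill":          {"access": 4, "sharing": 2, "discourse": 4},
    "ResearchHub":      {"access": 5, "sharing": 5, "discourse": 4},
    "StackOverflow":    {"access": 5, "sharing": 3, "discourse": 4},
    "Hacker News":      {"access": 4, "sharing": 5, "discourse": 4},
    "Personal website": {"access": 3, "sharing": 3, "discourse": 0},
    "LessWrong":        {"access": 5, "sharing": 5, "discourse": 5},
}

# Rank tools by their combined score across the three dimensions.
ranked = sorted(scores, key=lambda tool: sum(scores[tool].values()), reverse=True)
for tool in ranked:
    print(f"{tool}: {sum(scores[tool].values())}")
```

Of course, summing the axes flattens real trade-offs (a forum with no archival status is not interchangeable with a journal), so the combined number is only a rough shorthand for the map.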
Some of my initial questions were: why do we need journals? Why don't all researchers publish their work independently? But after exploring publishing tools and researcher needs, I see that we benefit from the reach, filtering, and archiving that publishers provide.
The White House Office of Science and Technology Policy issued a memorandum that will effectively ban paywalls on federally funded research by January 2026. This gives me hope that access will become an extinct issue.
To address issues beyond access, researchers have adopted tools beyond paywalled journals and stagnant PDFs to supplement the deficiencies of publishing tools. For example, Twitter enables discussion and dialogue, enriched context, and sharing of a scientist's subjective experience, which is so stifled and suppressed in traditional publishing that writing in the first person, active voice is a bold move for science writers. Forums like StackOverflow or Hacker News also enable richer discourse environments, with focused intention.
Comparing discussion forums against curated collections of discrete publications highlights the value of filter functions.
In going through this list, I constantly found myself thinking that the quality of dialogue happening on Twitter is simply incomparable to the quality of dialogue that happens in reviewer notes or lab meetings, where everyone has a focused and shared depth of knowledge and context. I often thought my intuition to seek open, online discourse environments that capture the value of systematic and institutional zeniths (like peer review and in-person lab meetings) was hopeless. Now, I've come to find that the qualities of these discourses are simply different, not necessarily higher or lower:
Zeniths: formal, attempt to be objective, remove the human experience, rigorous and deep.
Online dialogue environments: human-centric, subjective, open confusion, can be shallow and unbacked.
I think it is undeniable that researchers get extreme value from these more human-centric discourse environments; I think they actually end up being more honest than their "objective" counterparts. I would love a tool (or online culture?) that captures the positive value-adds of the zeniths (rigor and depth) and the positive value-adds of the dialogue tools (human-centricity).
If I were to do this again, I would do it as a survey and use the five axes shown below: