To introduce the project, we'll explore some potential goals of reliable rationality routines, obstacles to those goals, and plans for early steps to close the actual-ideal gaps.
Reliable Rationality Routines
The idea of reliable rationality routines captures a big part of the direction of this project. Here is what I mean by it:
Reliable ≈ Used & Effective
≈ More likely to actually apply rationality-informed methods at well-suited opportunities.
≈ Applying high-expected-value processes (the approximately best alternatives that communities like LessWrong and Effective Altruism, science, and expert fieldworkers have explored so far).
This involves using well-designed instruments that extend human capability.
This also involves balancing psychological short-term needs to fuel reliable continued growth of long-term value for oneself and others.
Rationality = the LessWrong definition of Rationality
It is common for people to have different definitions of the word rationality, as Julia Galef describes in her talk on the topic.
Which definition of rationality is most common or most correct isn't the point; the point is that we're communicating about the same thing.
It's beyond the scope of this article to argue for the value of this definition of rationality.
If you're skeptical, try exploring the definition on LessWrong for just a minute. It enriched my life.
Rationality-informed methods help you gain a clearer understanding of the world and improve the prioritization in your decisions, so that you actually create more value in the world (possibly including both personal and altruistic value).
Routines ≈ Practice / Maintain and Perform
≈ Practice, sometimes in solitude, sometimes shared.
≈ Learn effectively through drills on specific skill gaps, with feedback.
≈ Update yourself and your environments (habits, methods, instruments, collaborations, aspirations, systems, etc.) to continue to have an actually high level of rationality expertise.
≈ Apply skills in valuable projects (self-improvement & world-improvement).
≈ Take care of what is bugging your mind directly (turbocharge!), using methods through instruments at well-suited opportunities in the context of your work and life projects, not only during one-time events such as courses or workshops.
≈ Transparency and openness in showing the processes and systems one actually uses, as well as aspirations for more ideal systems.
Main purposes of this: (1) to receive support to cope with reality, (2) to receive feedback to effectively update, and (3) to share less lossy process information from which the community of aspiring world improvers can update their processes.
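As a hedged, minimal sketch of what "drills on specific skill gaps, with feedback" could look like in software, here is a Leitner-style spaced-repetition loop. The box intervals and function names are my own illustrative assumptions, not part of this project:

```python
# Minimal Leitner-style drill scheduler: skill gaps you fail come back sooner,
# skill gaps you pass move to boxes with longer review intervals.
BOX_INTERVAL_DAYS = [1, 3, 7, 14]  # illustrative intervals for boxes 0..3


def review(box: int, passed: bool) -> tuple[int, int]:
    """Return (new_box, days_until_next_drill) after one drill with feedback."""
    if passed:
        box = min(box + 1, len(BOX_INTERVAL_DAYS) - 1)  # promote on success
    else:
        box = 0  # failed drills restart at the shortest interval
    return box, BOX_INTERVAL_DAYS[box]


# Example: one skill gap drilled twice, with one failure
box, wait = review(0, passed=True)    # promoted to box 1, next drill in 3 days
box, wait = review(box, passed=False)  # back to box 0, next drill in 1 day
```

The point of the sketch is the feedback loop itself: the schedule reacts to your performance, rather than leaving you to remember on your own when to practice which gap.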
"Those who work with the door shut"
...
"don't know what to work on, they're not connected with reality"
...
"I cannot prove to you whether the open door causes the open mind or the open mind causes the open door"
...
"The guys with their doors closed were often very well able, very gifted, but they seemed to work always on slightly the wrong problem".
— Richard Hamming, an observation of a correlation, in "You and Your Research"
Challenges of growing and maintaining reliable rationality routine expertise.
A sample of the challenges
• In knowledge work there exist many possible methods to use and many patterns for where those methods could be applied. Reliably noticing the patterns and recalling the methods is hard amid all the noise.

How models transform data into wisdom
The image comes from the book.
I added the scribbles on the right. Improving models also involves creativity, but the most difficult kind of creativity (not necessarily innovation) might be understanding how to combine knowledge to act or design appropriately in different situations.
I also added the diver, for fun and to refer to the metaphor of swimming in a sea of uncertainty from the book Transcend.

• It is hard to effectively choose and maintain your knack stack (a reference to choosing a tech stack in software development, but for choosing which methods and beliefs you let run on the limited space of attention in your mental processes).
Even if we use the best available tools, we might not use the best available methods while using them, even if we've read about those methods at some point.

• Reliable motivation calibrated to truth is an emergent property of well-grown and well-maintained mental systems.
These systems are invisible and thus very hard to gain a clearer understanding of without spending a lot of time studying cognitive science.
People are left on their own to figure out what their mental needs are and how to effectively fulfill them.
To help people more reliably have a truth/accuracy-seeking mindset, we need to provide design that helps them model and understand, in real time, how to balance mental resources, e.g. emotions of comfort and self-esteem and social needs of belonging.
• Gain the fruits of, and contribute to, a world with post-scarcity of effective rational compassion support.
To maintain and grow ourselves and the world we need both well designed challenge & well designed support.
Some top performers are really skilled at rational compassion support, but they have limited time. How do we create a scalable system where all aspiring effective altruists receive high-quality rational compassion support?
• Learning from the work processes (not just the work outputs) of Effective Altruism / rationality experts is hard, since that information is a scarce resource for the large part of the community that isn't in the loop.
Text and podcasts that use only natural language lossily compress experts' deep models, and they often describe topics rather than process.
Improving the world isn't a competition, yet there is little design helping us efficiently spend a sample of our time sharing our wisdom in nearer-lossless formats so people can learn from each other's processes.
• Doing effective maintenance of knowledge work, to keep our routines more reliably effective.
• People are by nature biased toward new information that is low-effort to take in. Practicing to apply the applied-rationality basics well is also important, but effortful and thus less likely to occur continuously without improved design.
Rationality verbs, some more reinforced by design than others
Inspiration for the concept of rationality verbs came from game designers, who discuss which verbs players play with as they interact with a game design, e.g. "press the A-button to make Mario jump", "look at the minimap to orient and decide where to go next" or "put a card from your hand onto the table".
Similarly for rationality: to show & tell an effective rationality process we need verbs, i.e. behaviors/procedures that an aspiring rationality expert performs well in order to be considered more rational.
The point isn't that the verbs should be exclusive to rationality; they are merely an aid to understanding what kinds of activities a rationality expert actually tends to perform.
As will be discussed more later, it matters which method you use to do the verbs.
If better alternatives were to come along which fulfill the same goals / constraints, then we should adapt. Verbs are important, but particular verbs can be factored or updated.
Some rationality verbs are more salient in the community than others:
Do you know of a better taxonomy/ontology where, for example, the verbs don't intersect as much, or sit at a different, more useful level of abstraction?
Your feedback is welcome, so we can improve:
Forecasting / Predicting
Betting
Modeling
Exploring
Updating / Calibrating / Orienting
Aligning motivation to truth
Taking Action
Deciding / Prioritizing / Planning
Reasoning
Self-directed Behavior Change
Measuring
Reordering values/aspirations
Evaluating / Scouting / Testing / Reviewing
Expanding comfort-zones
Coping
Resting
Supporting / Reinforcing
Communicating / Explaining
Coordinating / Cooperating
Showing process
Serving / Helping
Resource Gathering / Maintenance / Spending
Creative problem solving
Designing / Systemizing
Engineering / Building / Making
Automating
Optimizing / Improving
Experimenting
Learning / Practicing / Doing Scholarship
Debiasing
Focusing
Observing / Noticing / Monitoring
Instrumenting (using tools)
Explore more rationality verbs...
In the last decade, forecasting has been made more visible both in the communication channels of communities of people trying to improve the world effectively (e.g. Effective Altruism, LessWrong) and through affordances in dedicated forecasting tools.
I think there are more rationality verbs that could benefit from a similar treatment.
Design could be improved to help us balance our time spent on different verbs, to optimize (better approximate) which actions would be most useful to us.
Interfaces as reliable rationality routine reinforcements
"The structure of things-humans-want does not always match the structure of the real world, or the structure of how-other-humans-see-the-world. When structures don’t match, someone or something needs to serve as an interface, translating between the two."
Nielsen and Carter conclude that interfaces are important because "interface design means developing the fundamental primitives human beings think and create with", not just "making things pretty or easy-to-use".
Good interfaces empower people to do behaviors in processes that enhance the value they can create.
A metaphor that many in the LessWrong community associate with rationality.
I added the comic bubbles.
"A tool converts what we can do into what we want to do."
This project has scouted the design space of reliable rationality routines in search of important design properties that could improve our instruments and the infrastructure around them.
Let's explore some aspects of the current design that could be improved.

Gap and improvement opportunity bugging us
A few limits of the status quo interfaces
One reason that knowledge of rationality verb systems is hard to share in a way that gets more people to actually use them is that we still mostly communicate with textual representations of knowledge, even though we have powerful computers capable of dynamic media, as Bret Victor effectively explains.
"The wrong way to understand a system is to talk about it, to describe it, the right way to understand it is to get in there, model it and explore it, you can't do that in words. So what we have is that people are using these very old tools, people are explaining and convincing through reasoning and rhetoric, instead of the newer tools of evidence and explorable models."
It is understandable that natural language remains the status quo: people spend a lot of time learning to read and write, and natural language is expressive.
Learning to author in new formats takes time and effort, so it's easier to use textual/symbolic representations of knowledge as a lowest-common-denominator format.
Unfortunately, a one-size-fits-all approach often results in far-from-optimal effectiveness for many rationality verbs.
Combining text with dynamic media opens a door to new capabilities.
Even with truly great explanations, information isn't enough for behavior change and effective learning. When the prominent environment design we present to people is text, they will probably mostly read text, not necessarily practice the methods in the text reliably.
The people who do practice are very motivated and are OK with spending a lot of time and effort reinventing the wheel of how to design a good practice routine.
Motivation and grit for hard but useful skill building is good, but creating systems/methods/knowledge to make it easier is also important.
This provides a better starting point from which people can take the role of a scientist/scout and optimize, later sharing their improvements back to the pool of human knowledge.
To learn and maintain effective processes we need to understand how we could improve by getting feedback.
One way is to read a lot and then try to recall the knowledge in the right situation to test ourselves against those models.
This is hard, especially for novices.
Still, for novices and experts alike, to effectively grow and maintain our skills and our projects we need to see what we're doing.
Dynamic media for seeing, exploring and deeply understanding are not just empowering as training wheels, as Bret Victor explains.
The scientific paper template joke
Modeling tools aren't only for sporadic activities like modeling scientific-paper communication; they can be for day-to-day decisions too. Even if we don't usually need full freedom and power, it's important to be able to take the escape hatch when we need it.
A few similar framings of this are understanding-through-building or "jump in and figure it out".
Many people could benefit from effective instruments and environments for modeling, understanding and creating.
The existence of good shortcut design would still be compatible with researchers continuing to push the boundaries with the vast power and freedom of mathematics, systems programming and natural language.
There have been attempts to create more software for procedural learning of rationality verbs, e.g. by Clearer Thinking. Yet the design capabilities of the GuidedTrack authoring tool are still a small subset of the potential of the dynamic medium.
The courses are structured like forms, which do have some strengths.
The format has served Clearer Thinking well in making a lot of mini-courses / method guides on useful topics, but we also need more powerful tools to further push the boundaries of capability empowerment, expressiveness for diversity, enjoyability and other interface virtues.
These opportunities for improvement will be discussed more below.

What could a dynamic medium look like?
There are a lot of additional limits on the status quo; for more depth, explore e.g. Andy Matuschak's research.
A few early steps toward a vision of reliable rationality routine interfaces
As a way to explore this design space I've partly taken the approach of understanding-through-building (alternatively framed as "jump in and figure it out").
The prototype doesn't fulfill all the identified virtues of reliable rationality routines, but it has helped make my hypotheses clearer by providing a concrete, visible design against which to consider how to fulfill those virtues.
A later section will show early sketches of additional design ideas that could support additional virtues.
With Instrumentally you take care of what is bugging your mind, during work and life projects.
You build simple but useful models of the mental needs you currently have. Then you use rationality-informed methods to balance your mind toward more reliable long-term mental states.
You launch rationality-informed methods to inspire your moves, and powerful instruments to aid you as you explore and problem-solve in your work and life projects.
To empower your problem solving you use multiple powerful tools on your computer together, instead of settling for a lowest common denominator.
You invite allies and take turns to reinforce each other as you challenge yourselves to grow yourselves and the world for the long-term.
Your feedback is welcome, so we can improve:
Scope & Expectation Requests
In the spirit of the aspiring Effective Altruism community, which mostly values openness, transparency and honesty, I try to be clear about my intentions and uncertainties.
I struggle to improve the reliability of my rationality routines, even though I'm very motivated to improve.
With compassion I see other aspiring Effective Altruists and LessWrong rationalists in my circles struggling with reliable rationality routines too. My circles are of course not necessarily representative of the whole community, but might be to some extent.
During my CFAR workshop attendance, the importance of design was presented: we shouldn't rely on self-control alone. Yet little design that helps us maintain reliable rationality routines is accessible to most people who are not in the loop.
So I want to figure out how to make more accessible systems to help more people improve their rationality routines, wherever they live, however far they've advanced in their careers, and at their preferred pace.
Importantly, I view the improvement of reliable rationality routines as a community project; I'm just proposing some directions we might want to include to complement existing approaches.
I could write papers to explore and communicate theory on these problems.
This would have the benefit of being able to focus on approaches that could be better for the long-term.
Or I could design systems that try to address the problems directly. These systems would probably be far from optimal, but they'd help people now and gather data from real-world use.
We need both approaches, and I wanted to do something in between with this project. Both providing value sooner and improving the design for the long-term.
If you notice any improvement opportunities of the article as you're exploring, I would truly love to know!
I've tried to take my responsibility of inviting feedback by reminding explorers throughout the article, like this:
Your feedback is welcome, so we can improve:
Learning, research and iteration have been done on the project as a side project for a couple of years, and then full-time for about six months. Not all the ideas discovered are included in this article, for brevity and because it is hard to know which would be most valuable to a more general audience.
Since I'm partly doing the project as skill building, quality is lacking in certain aspects that I'm practicing to improve. I believe getting feedback from an incrementally increasing circle of disagreeable givers with a scout mindset is better than polishing something too much before the community's judgment has reassured me that it is worth optimizing.
I'm aware I'm trying to tackle a hard question, and that my skills aren't enough to explore it as well as I'd like. Although I'm learning a lot, potentially somebody more skilled could distill, scout arguments, mentor me to think clearer about the cause or, at least partially, take up the baton.
Although I've been fairly time- and resource-constrained, I've based as many assumptions as possible on science (mainly cognitive science, rationality and interface design), on fieldwork experts in interface design and game design, and on reasoning from people with good judgment in the Effective Altruism and LessWrong communities. I will have misunderstood things, but I am motivated to more thoroughly scout whether the approach is worth more attention or not.
Scope / Things I won't do or didn't do yet
• Explain all of the most important models I've used to support the design of the current prototype. I long for a future of systems in which you can learn about a system's design decisions, and the conceptual tools (theory) and evidence used to make a robust case for those decisions, in the context of the system itself. Design is a messy process, and I aspire to be more organized and thorough in presenting all the models I've used in the future. For now, please reach out if you're curious about a specific design decision or if you have any improvement opportunities bugging you! I've done my best to base decisions on interaction design, more solid cognitive-science theory, rationality principles, and other expert fieldworkers such as Jonathan Blow.
• Test on many users, so far only acquaintances and a few interface professionals have evaluated the prototype and the vision of the project. Part of the motivation for this article is to get feedback and readjust direction.
• Ship/launch the Instrumentally app (so far it's only aspiring scholarship, not actually shipped).
• Design for the general public. So far the prototype focuses on helping people who have been to a CFAR workshop or read the handbook thoroughly.
––––––––––––––––––––––––––––––––––––––>
Start by exploring the concrete design, then move toward how this aspires to fulfill big-picture values.
Choose this if your ease of understanding abstractions is lower.
If not sure, choose this default path.
––––––––––––––––––––––––––––––––––––––>
Start by exploring the big-picture value, then move toward more concrete ways to increase that value.
Choose this if your perception of the value of the project is lower.
Both walks cover the same information, but in different order based on your selected need.
Your feedback is welcome, so we can improve:
Here are some contribution idea prompts:
• Provide reading recommendations. Do you know of better articles about the same topic, or related articles that could enrich the robustness of this article?
• Suggest or refer people who think continuing this research would be valuable.
It is stressful to do independent research without the security of any income. Odds are that the research would progress faster if I had more security.
• If you think you or somebody you know could fit as a co-founder, please contact me.
My strength is in research and design, not in managing companies.
• Can you goal-factor (a CFAR technique) the design presented in this article and think of a superior alternative?
• Using knowledge from cognitive science, rationality, human-computer interaction and other sciences, model better design alternatives that score even higher on the presented virtues for reliable rationality routines (and potentially other important virtues not presented). Build and share with the communities LessWrong and Effective Altruism!
• Do you think there is a feasible way that AI could help with the obstacles of reliable rationality routines?
• Study Bret Victor's research carefully, estimate how important you think it could be, and share the results on LessWrong or the Effective Altruism Forum.
• Distill and communicate the most important information of this article better than I can.
• Help me improve this communication model however you think is best, or use e.g. these feedback prompts:
Have you come to think of any strong evidence for or against specific claims?
Describe your cruxes with the model
How likely would you be to recommend this to a friend if the vision above came true?
Rate the article based on the RAIN Framework scoring and suggest possible ways to improve the article on those factors. Are there certain parts that bring down the average a lot?
Make your own attempt at a communication model, with better RAIN Framework scores, of what you consider the main points of this article.
Your feedback is welcome, so we can improve: