Joining Ought
By James Brady (@james_elicit)
In mid-December, I left Spring, and about a month later I joined Ought as their head of engineering.
It was a momentous change to leave a job, a company, and my team after so long, but there are so many aspects of Ought that I'm incredibly excited about that, in the end, it was a move I had to make.
Here are some of the reasons it was not only easy but essential for me to join Ought. If any of these resonate with you: yes, we are hiring 😉.
1: I wanted a new challenge, and Artificial Intelligence is exactly that
After nearly seven years at Spring, I felt my learning curve was starting to flatten off. There were always new challenges and opportunities, but fundamentally they were of a similar shape to those I'd seen the previous month or the previous year.
This is a great sign that it's time to move on: we do our best work when we're learning new things at the same time as applying old tricks.
Ought is using the most exciting current AI technology (large language models) to tackle incredibly hard problems – problems which even most humans would struggle with. This is a whole new toolchain for me, and a valuable new skill for me to learn.
Rather than taking a year or two to do a master's degree of questionable utility, I can continue working while getting hands-on experience with the latest AI models.
2: Artificial Intelligence is going to be a big deal
The first ultra-intelligent machine is the last invention that man need ever make
Since the earliest days of AI research, philosophers and computer scientists alike have realised that thinking machines with capabilities equivalent to humans would be a truly transformative technology.
These machines would be able to accelerate scientific research, liberate us from drudgery, cure diseases, blow through economic constraints, create and curate beauty, explore the galaxy, and order replacement printer ink cartridges for us.
When all this will happen is still very unclear. However, most experts would agree that transformative AI will exist by 2100¹. Well before then, we will see AI-driven revolutions on a per-business, per-sector, per-country basis. Massive change is on the horizon and approaching rapidly.
It seems a very smart career move to jump on the AI wave at this early stage.
3: Artificial Intelligence carries serious risks
Unfortunately, I only gave half of I.J. Good's quotation before. Here's the full version:
The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
And Good wasn't alone in recognising the potential for catastrophe with AI:
If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire.
— Norbert Wiener
It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon.
— Alan Turing
And it's not just mid-20th century hand-wringing. Over the last few years, there has been an explosion in awareness about the risks posed by AI:
- Several excellent popular books: Superintelligence, The Alignment Problem, Human Compatible, Life 3.0, …
- The BBC's 2021 Reith Lectures were given by Stuart Russell on the topic "Living with Artificial Intelligence".
- New organisations and safety teams are springing up: Google's DeepMind, OpenAI, Anthropic, Redwood Research, and of course my very own Ought.
However, it was Toby Ord's The Precipice which crystallised the urgency of the problem for me, and galvanised me into action.
In this book, he estimates that there is a 10% chance that within the next 100 years, out-of-control AI will render humanity extinct.
Some are more optimistic than this; some are more pessimistic. However, even if Toby's estimate is orders of magnitude off, it's clear that if we want to realise the benefits of AI listed above, we need people thinking very carefully about how to do that safely.
That's exactly what we're doing at Ought. We're building AI systems in such a way that they're intrinsically safer than the capability-at-all-costs approach seen at some other organisations.
4: Ought is part research, part product-building, and part Effective Altruism
On a lighter note, the team and wider community around Ought are a lovable bag of oddballs: incredibly smart, warm, dedicated, welcoming, and ardent believers in making the world a better place.
They fit under the Effective Altruism umbrella – a community I've enjoyed spending more time in over the last couple of years. We've hired several excellent teammates from this pool, and have benefitted from grants from several EA-aligned funders, like the Long-Term Future Fund, the Future of Life Institute, and Open Philanthropy.
We've made contributions to AI research – notably our RAFT benchmark – but at its core, Ought is a product-building organisation. The field of AI safety has thankfully reached a level of maturity where some blue-sky research ideas are ready to be realised, and that is exactly what we're doing with Elicit.
It's often where two fields or approaches collide that something truly magical happens: it's exciting to be able to take the best ideas from the literature, and watch them germinate and grow in the fast-paced, user-centric startup environment I'm used to.
If you're interested to find out more about what we're working on and where we're heading, please get in touch! You can also see the roles we're hiring for on our careers page.
Footnotes
1. There are lots of different timelines for transformative AI. The most thorough I've seen is Ajeya Cotra's Draft report on AI timelines (skip to the conclusion). ↩