The Machine Intelligence Research Institute works to prevent extinction from artificial intelligence. They were notably early to this, focusing on extinction risk since 2005, and were upstream of much of the field's foundations, raising the issue to the wider attention of EA and, more recently, the world.
Currently, they have shifted focus towards broad public outreach (https://intelligence.org/2023/10/10/announcing-miris-new-ceo-and-leadership-team/), away from their previous focus on technical research, though they are continuing technical research. The new emphasis on outreach responds to the increased public and political attention given to AI extinction risk.
I think that MIRI has good models about AI, and that when they communicate they will honestly describe the risks as they see them and the measures that would need to be taken. From my perspective, there is a good chance that the policy actually made will be politically expedient and insufficient, and MIRI at least does not pull punches when saying what they think is necessary. I also think that, because of these models, while they might have trouble finding things to fund, the money they do spend is spent in a way that is hard for others to beat in expectation.
There is disagreement in the alignment community about whether MIRI is correct, disagreement that hasn't gone away despite excellent discussion among participants with a range of views on extinction and on how and why it might happen (https://intelligence.org/late-2021-miri-conversations/). This is one reason you might not find them promising, though I do think they are the place for doing work that is good by the MIRI worldview.
Tetra is a niche Twitter microinfluencer by night and a software engineer by day. Their greatest achievements are coining the name "notkilleveryoneism" and drawing the original shoggoth meme. They arrived at effective altruism through rationalism and LessWrong, which they in turn came to through many different online communities that all pointed in some way towards it; when they did arrive, they found the principles of EA very appealing and the people of Oxford EA to be a nice group.