Today is certainly a historic moment of social tension, and I view an important role of our Company as defending free expression. Now, this has never been absolute, and of course we take our responsibility to prevent harm very seriously too. I believe we invest more in getting harmful content off our services than any other company in the world. Those who follow us closely know that we have more than 35,000 people working on safety and security, and that our budget for this work is billions of dollars a year, more than the entire revenue of our Company at the time of our IPO earlier this decade. And we're going to keep investing more here.
So over time we may not have to ramp it up as much, but I don't foresee any time in the near future when AI is going to bring that cost down. In general, what we do is use computers and AI for what they're good at, which is looking at a lot of content very quickly and making quick judgments, and we have teams of people doing what people are good at, which is making nuanced human judgments. So we build the computer systems so they can flag and remove some of the worst content, and flag for human review some of the content that's borderline. And there's just so much content flowing through the system that we do need a lot of people looking at this. I don't think that's going to change anytime soon.