Instagram has shared new details on how its app uses machine learning to surface content for users, stressing that, when making recommendations, it focuses on finding accounts it thinks people will enjoy, rather than individual posts.

While Instagram has not been criticized with the same ferocity as YouTube (dubbed “the Great Radicalizer” by The New York Times), it certainly has its share of problems. Hateful content and misinformation thrive on the platform as much as on any other social network, and certain mechanisms in the app (like its suggested follows feature) have been shown to push users toward extreme viewpoints on topics like anti-vaccination.

In its blog post, though, Instagram’s engineers explain the operation of the Explore tab while steering clear of thorny political issues. “This is the first time we’re going into heavy detail on the foundational building blocks that help us provide personalized content at scale,” Instagram software engineer Ivan Medvedev told The Verge over email. (You can read about how Instagram organizes content on the main feed in this story from last year.)

The post emphasizes that Instagram is huge, and the content it contains is extremely varied, “with topics varying from Arabic calligraphy to model trains to slime.” This presents a challenge for surfacing content, which Instagram overcomes by focusing not on what posts users might like to see, but on what accounts might interest them instead.

Instagram identifies accounts that are similar to one another by adapting a common machine learning method known as “word embedding.” Word embedding systems study the order in which words appear in text to measure how related they are. So, for example, a word embedding system would note that the word “fire” often appears next to the words “alarm” and “truck,” but less frequently next to the words “pelican” or “sandwich.” Instagram uses a similar process to determine how related any two accounts are to one another.
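To make the analogy concrete, here is a minimal, hypothetical sketch of the word-embedding idea applied to accounts, not a description of Instagram’s actual system: users’ interaction histories stand in for sentences and account names stand in for words, so accounts that the same people engage with end up with similar vectors. All account names and sessions below are invented, and gensim’s Word2Vec is simply one convenient off-the-shelf embedding model used for illustration.

```python
# Sketch: treat accounts like "words" and each user's interaction history like
# a "sentence", then train word-style embeddings so accounts engaged with by
# the same people land close together in vector space.
from gensim.models import Word2Vec

# Each inner list is one hypothetical user's recent interaction history.
sessions = [
    ["calligraphy_daily", "ink_and_nib", "arabic_letters"],
    ["ink_and_nib", "calligraphy_daily", "slime_asmr"],
    ["model_trains_hq", "tiny_railways", "scale_scenery"],
    ["tiny_railways", "model_trains_hq", "calligraphy_daily"],
    ["slime_asmr", "satisfying_slime", "glitter_goo"],
]

# Skip-gram (sg=1) embeddings; tiny vector_size because the toy corpus is tiny.
model = Word2Vec(sentences=sessions, vector_size=16, window=3,
                 min_count=1, sg=1, epochs=200, seed=7)

# Accounts that co-occur in the same sessions score as most similar,
# mirroring how "fire" sits closer to "alarm" than to "pelican".
print(model.wv.most_similar("model_trains_hq", topn=3))
```

In a real recommendation setting, those nearest-neighbor lookups would then seed candidate accounts whose posts can be ranked and shown to the user.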