
We're introducing TextStyleBrush, an AI research project that can copy the style of text in a photo using just a single word. With this AI model, you can edit and replace text in images.

Example of TextStyleBrush replacing text on handwritten signs at a fruit stand.

While most AI systems can do this for well-defined, specialized tasks, building an AI system that's flexible enough to understand the nuances of text in both real-world scenes and handwriting is a much harder challenge. It means understanding unlimited text styles: not just different typography and calligraphy, but also different transformations, like rotations and curved text, the deformations that happen between pen and paper when handwriting, background clutter, and image noise. Because of these complexities, it's not possible to neatly segment text from its background, nor is it reasonable to create annotated examples for every possible appearance of every letter and digit.

Now, we've built a system that can replace text both in scenes and in handwriting, using only a single word example as input. TextStyleBrush proves that it's possible to build AI systems that learn to transfer text aesthetics with more flexibility and accuracy than was previously possible, from a one-word example.
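To make concrete what that single-word style transfer avoids, here is a minimal, illustrative sketch of the naive alternative: painting over the original word and drawing new text in a generic font, which discards exactly the typography, lighting, and background described above. The filenames, bounding box, and replacement word are hypothetical, and this is not the TextStyleBrush method itself.

    from PIL import Image, ImageDraw, ImageFont

    # Naive replacement: cover the original word with a flat rectangle and
    # draw new text in a generic font. Everything that made the source word
    # distinctive -- typography, stroke texture, lighting, background -- is lost.
    # The filename, bounding box, and word below are purely illustrative.
    photo = Image.open("sign_photo.jpg").convert("RGB")
    word_box = (120, 45, 310, 98)  # (left, top, right, bottom) of the word to replace

    draw = ImageDraw.Draw(photo)
    draw.rectangle(word_box, fill="white")   # erase the original word and its background
    font = ImageFont.load_default()          # generic font, not the sign's style
    draw.text((word_box[0], word_box[1]), "CHERRIES", fill="black", font=font)

    photo.save("sign_photo_naive_edit.jpg")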

While this technology is still research, it could power a variety of useful applications in the future, like translating text in images into different languages, creating personalized messaging and captions, and maybe one day facilitating real-world translation of street signs using AR.

Lower Barriers to the Study of Deepfake Text

By openly publishing this research, we hope to spur additional research and dialogue preempting deepfake text attacks, in the same way that we do with deepfake faces. If AI researchers and practitioners can get ahead of adversaries in building this technology, we can learn to better detect this new style of deepfakes and build robust systems to combat them.
