Lemon Slice raises $10.5 million: what can a 20-billion-parameter diffusion model do?
[Coin World] Lemon Slice's latest financing move is significant: the company has just closed a $10.5 million seed round to bring its diffusion model, Lemon Slice-2, to market.
The model itself is notable: it generates interactive digital avatars from a single reference image, and despite its 20 billion parameters it runs on a single GPU while sustaining a stable video stream at 20 frames per second. At 20 fps, each frame must be produced in roughly 50 milliseconds, which is what makes real-time operation without a supercomputing cluster a tangible advantage for deployment costs.
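To make that real-time constraint concrete, here is a minimal sketch (not Lemon Slice's actual code) of the per-frame budget a 20 fps interactive stream imposes; `generate_frame` is a hypothetical stand-in for one pass of the avatar model.

```python
import time

TARGET_FPS = 20
FRAME_BUDGET_S = 1.0 / TARGET_FPS  # 50 ms per frame at 20 fps

def generate_frame(t: float) -> str:
    """Hypothetical stand-in for one pass of the avatar model.

    In a real system this would run the diffusion sampler on the GPU
    and must finish within FRAME_BUDGET_S to keep the stream live.
    """
    return f"frame@{t:.3f}s"

def stream(duration_s: float = 2.0) -> None:
    start = time.monotonic()
    next_deadline = start
    frames = 0
    while time.monotonic() - start < duration_s:
        generate_frame(time.monotonic() - start)
        frames += 1
        # Pace output so viewers receive a steady 20 fps; if generation
        # ever exceeds the 50 ms budget, the stream visibly stutters.
        next_deadline += FRAME_BUDGET_S
        sleep_for = next_deadline - time.monotonic()
        if sleep_for > 0:
            time.sleep(sleep_for)
    print(f"{frames} frames in {duration_s}s -> {frames / duration_s:.1f} fps")

if __name__ == "__main__":
    stream()
```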
The application scenarios are broad: customer-service agents, virtual instructors in education, mental-health support, and similar areas are all commercially promising fields. Because the model supports both human and non-human characters, the creative space widens further, covering everything from realistic virtual personas to stylized 2D assistants.
The investors behind it are also strong: Matrix Partners and Y Combinator are backing the round together, and both firms have a solid track record in judging AI infrastructure and applications. The decision to expose the model through an API suggests a platform play, lowering the barrier for developers to integrate it.
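The article does not document Lemon Slice's actual API, so the following is only a hedged sketch of what developer integration with such an avatar service could look like; the endpoint URL, field names, and response shape are all hypothetical illustrations.

```python
import requests  # third-party: pip install requests

# Every name below is a hypothetical illustration, not Lemon Slice's real API.
API_URL = "https://api.example.com/v1/avatars/stream"
API_KEY = "YOUR_API_KEY"

def start_avatar_session(image_path: str, prompt: str) -> str:
    """Upload a single reference image and open an interactive avatar session.

    Returns a session ID that a client would then use to exchange audio or
    text with the generated avatar in real time.
    """
    with open(image_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            data={"prompt": prompt, "fps": 20},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["session_id"]

if __name__ == "__main__":
    session = start_avatar_session("teacher.png", "a friendly virtual lecturer")
    print("interactive session started:", session)
```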
The significance of this round goes beyond the money itself: it signals that diffusion models are accelerating their move into practical, production-grade productivity tools.