May 2023

Full disclaimer: I came up with this idea.

But I'm genuinely curious to hear your answer nonetheless.

The idea is to create personal AI assistants that help artists create backgrounds faster for the stories they're working on, with consistent styles.

I know most artists struggle with time, and some will even trace over 3D Blender models just to survive client deadlines...

Anyhow, this is the website: https://magicmanga.co/16

What are your thoughts? Would you use this tool to help you in your workflow or not?

Can we get any info on what data the AI was trained on?
Also, the links in the upper bar of the website don't lead to anything.

There's nowhere near enough information for me to make a decision on whether I'd use this or not. There aren't even any screenshots of the sort of thing it produces compared to an artist's usual work, and the brief animation only shows what looks like a mockup using photos, not 3D models.

Besides which... tracing the Blender models, even with the extra details I put in and the lighting I add and stuff, isn't the time-consuming part. Sourcing all the models, or sometimes even building them from scratch, and placing them in the scene takes far longer. So I can't imagine then spending money to skip over the easy part that takes 30 minutes after I've just spent hours on the more difficult part.

No, for many reasons:
1) In many fantasy and sci-fi comics, the setting is like a character in itself, so having AI make it will produce a very generic, lifeless backdrop with no real identity or originality to it.

2) Many authors who have trouble with backgrounds struggle because of a lack of practice. If I used AI to do my backgrounds, my art would suffer in the long run, and if AI gets totally banned (like it currently is on Tapas), I would have no recourse.

3) Backgrounds aren't separate from characters. Characters interact with their environment all the time through props, stance, being partially hidden behind the foreground, even just doorways and everyday appliances and lighting. The background colour is going to severely change how the character's colours are perceived by the audience, so doing the characters and then having a machine generate a decor is going to be Russian roulette in terms of colour palette, correct perspective, viewing angle, art style, object placement...

4) I personally have a 3D Blender file with one of the main settings of my comic in it, from which I make base images to paint over. Thing is, I made that Blender file; I built that setting in Blender to use, working from references of stuff I imagined and created. I wanted to improve my art, so I learnt a marketable skill to create my own stuff without sacrificing any of my identity or any of my ownership of the work. I wouldn't necessarily be able to do that with AI, because work that is not done by a human is not copyrightable, so I could be in legal hot water if someone were to steal my work: I would have to spend a good amount of time proving that the work is legally mine before any copyright dispute discussion could even occur.

5) People use Blender because creating a solid 3D space allows you to avoid things like continuity errors: elements of the scene changing place, shape, size, etc. between panels, even when viewing the scene from vastly different angles. AI is notorious for not being able to keep track of that sort of thing because it doesn't conceive of the space as a space; to it, a scene is just a flat image with pixels of this RGB value in this spot. Any AI tool that works comparably to current AI tech is going to be full of issues and more trouble than it's worth for most artists.

I hope this explains why this would generally be a bad idea, especially for people who do Blender paintovers. AI has essentially the opposite properties of Blender, even independently of the qualms people have about how AI is trained and the ethics of the thing. It's just not a tool that has any advantages compared to the ones currently in use, even more so when you look at sequential art that requires continuity of space.

Here are my thoughts:
I'm not sure if I will use it, but I think the idea is good, and I'm interested in checking out whether it works and whether the linework looks convincing / consistent.
I think assistants like that are part of the future of comic creation.

Thank you for your response!

I've received a number of questions/comments on that point as well, so I intend to write a blog post in the near future explaining how training works and why I feel this approach helps artists retain copyright ownership in case of a legal claim.

I do struggle with backgrounds, but I don't trust AI software for copyright reasons. Also, the little preview image just looks like someone Googled random images and put a filter over it. If it's just a Photoshop filter, just call it that. People have been using filters for decades in comics.

I think something that would be far more helpful for artists is affordable, easy-to-use CAD software. Or a modern-day version of Bryce. You wouldn't have to rely just on photos; you could actually build your own BG.

First, thank you for the feedback!

What you're saying makes a lot of sense. I initially thought that if you had something like this, you could circumvent needing to use 3D models at all. In the animation I did photo-bashing because I can't draw to save my life :sweat_smile: but I imagined that an artist would "draw the rough outline of what they want" and go from there, as opposed to taking a 3D model into the assistant.

Maybe that wouldn't make sense though? Let me ask you this. Is the value, for you, of making a 3D model from scratch:

  • Fidelity to the real-world representation of that object?
  • The perspective/viewing angle that keeps your object from looking "weird"?
  • Or something else entirely?

I'm not sure if I'm the target for this or not. I do use Blender, but I don't trace anything. The 3D renders are the panels; in a very small number of cases I make slight edits to the renders in a drawing program to fix some minor things, but for the most part what comes out of Blender is the panel. Now, I have seen some image generators do a sort of decent job of superficially mimicking the look of 3D renders, so maybe this tool could still generate something in my style, but I think there are a number of issues.

Actually, I think most of my main issues have already been mentioned in other posts, so there's no reason to write a wall of text about it. But basically the big ones are that the lighting on the characters and the environment should match, and the scene should be consistent from different camera angles. I feel like both of those would be difficult to pull off with this kind of tool.

First off, thank you for your response! I appreciate you taking the time to write this out.

1) That's a fair point. Would you still think that would be the case if you had sufficient previous assets? What I mean is, all fantasy stories ultimately use the same building blocks but with their own twist to them: natural landscapes and cities. If you exposed your AI assistant to enough images of the Gorgon Fortress, then at some point wouldn't it figure out that this is a medieval setting and generate a "decent enough looking" view of a similar city?

2) This is true, and there is real platform risk in being 100% reliant on a tool that could be banned. This causes me to ask a question, though. Photoshop, for example, has been using AI for 10+ years to make layer filters or even object selection possible. If you use these tools to assist you in your workflow, would you consider your final work to be AI-made or AI-assisted? What would need to be different for the idea I'm proposing to fall under the "AI-assisted" umbrella?

I'm even considering purposefully crippling the final results to be so bad they would require an artist to further refine them in Photoshop to avoid that first label.

3) You are right on this point. Where AI is today, it would do a poor job with proper lighting and shading, which is probably too advanced for the technology. But that's not the goal anyway. I'm not trying to make a tool that can do what an artist can do. A human will forever be required to make meaningfully beautiful and relevant art, and it's foolish to think otherwise.

Rather, the aim here is to empower artists to bring their visions to life faster by giving them a noobie assistant they provide general guidance to, stepping in when the noobie intern doesn't do it quite right. If you feel like AI-assisted background creation does not align with this ultimate goal, what would help you bring your visions to life faster?

4) I wouldn't want people to use my tool to make everything for them since they would indeed lose ownership. The way I would want artists to use it is to come and say this:

Artist: Hey, look. For my next panel I need a scene of a shopping storefront where the characters will be going in. The street is gonna be laid out like this, the entrance is there, these are sliding doors, etc... Can you do that for me?
Assistant: Sure, I'll make it... [moments later] Here's a basic line-art image. Is this good enough for you?
Artist: Yeah, thanks. Now I'll import this into Photoshop and fix some of the things I don't like, but the general layout is good enough.

5) This is a very strong criticism to which I have no response. Overall, this does sound like it would be a net negative for avid Blender users, who would struggle with continuity of space.

This would be a better alternative for situations where you're not using a 3D space at all. It's pretty much just better "background mob filler/abstract shapes": say, a view from the top of a cliff, which you probably wouldn't create a 3D space for, or when you need to show a large crowd of people who all look slightly different. What are your thoughts on that?

The thing I value about a 3D model is that once I've made it, I can, unlike with a photo (at least a photo I didn't take myself), get multiple angles of the same location. So for example, my comic is about knights and has Arthurian Legends stuff... and there's a scene where two characters talk at the Round Table. I'm perfectly able to construct perspective from scratch, so if I only needed to draw one panel of a background, I'd just draw it from scratch, or use a photo if it's something tricky like a whole street. But if the scene goes on for several pages, I can't just use the same shot of the room from the same angle over and over again; I'll need shots from various distances and angles to keep things dynamic... like:

To make pages that look like:

So I want to create a consistent scene where the camera moves around, like the camera would in a live action show or movie, while retaining stylistic consistency. For me, setting up very simple 3D models, and then painting the lighting in myself based on my understanding of light and stuff, gives me the most consistent and pleasing results.
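As a concrete sketch of that kind of multi-angle reuse, here is a minimal Blender Python example (the camera names and output paths are hypothetical placeholders, not anything from the original post): the same scene is rendered from several cameras, so every panel's base image comes from one consistent 3D space.

```python
# Minimal sketch of multi-angle reuse with Blender's bundled Python API (bpy).
# Assumes a .blend file that already contains camera objects with these
# (hypothetical) names; each render becomes the paint-over base for a panel.
import bpy

camera_names = ["Cam_Wide", "Cam_Close", "Cam_OverShoulder"]  # placeholder names

scene = bpy.context.scene
for name in camera_names:
    cam = bpy.data.objects.get(name)
    if cam is None or cam.type != 'CAMERA':
        continue                                      # skip missing or non-camera objects
    scene.camera = cam                                # make this the active camera
    scene.render.filepath = f"//renders/{name}.png"   # '//' = path relative to the .blend
    bpy.ops.render.render(write_still=True)           # render and save the still
```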

Personally, what I'd like is a tool that allows me to customize backgrounds in a Sims 2 Build Mode fashion: being able to import materials, add textures, and customize things without needing extra programs or heavy software like Blender, while still requiring my intervention to place, resize, accommodate, fix, tweak things, etc. I'm not expecting prompts or a few clicks to solve everything for me.

I love The Sims 2, and that's what I use for my backgrounds. The 3D ain't the best in the world, but at least it feels a bit more original. The downsides: it takes up to an hour to load since I have lots of content in it, then of course there are the crashes, and the fact that I could get distracted with gameplay. Sometimes I've lost entire lots to corruption in the game and had to redo them, ending up with inconsistencies. But as darthmongoose said, the thing about 3D modeling and arranging is that once it's set, we can change angles, perspective, and camera effects, and reuse the material several times.

I've tried other software like Sweet Home and similar, but the layout and tools never end up being comfortable for me for some reason.

I still have to give MangaKa a try since it's probably the only program that seems to grasp a bit of what I'm looking for in a general sense, but the layout and hotkeys are still things I need to learn and get used to.

Regarding the website you've provided: nothing guarantees me that the gallery or database it implements won't have stolen content; anyone can type "tehe, this is 100% free to use" and it can be a total lie. I also wonder how exactly this AI plans to merge its material with the artist's. Does that mean that, in order to match the style of the artist, it's going to be fed on their work? And how can I guarantee that I'm going to get a similar result and not end up with something close but still uncanny valley?

If I were to implement AI, I believe it would be great if the software could only take my local files as its resource and database, while making sure nothing leaks to the internet. Then I could use the AI as an extra small-res window that suggests ideas on composition based on the data I've fed it.
For example, if I do a drawing but I'm not convinced about the color, the AI would suggest, in its window tool, a version with different colors or compositions based on the things I generally do, to give me an idea, instead of me tweaking several layers and trying different things and taking longer.
But I would still have to do the job of achieving that suggested result myself; the AI shouldn't be able to re-arrange my picture into something that can just be saved.

I'm thinking of something like those stock image websites: you put in a few words based on what you'd like and it provides you something close, but you still have to draw it on your own canvas, be it traditional or digital. That's what I would call assistance.

1) You seem to misunderstand the issue. Yes, the building blocks are the same at that level, but that's the level AI stays at. Everything that makes a world original is going to be in the specifics: how different aesthetics, real and imagined, are combined and fused to create something new. AI doesn't have direction; it doesn't fall in love with details or have sensibilities. The only way to get an original-looking AI world specific to one artist's vision would be to feed it only that artist's art, which would negate the point. If you expose it to images of the Gorgon Fortress, you get a knockoff Gorgon Fortress and copyright issues.

2) You again seem to misunderstand how artists use tools. Filters are far from being a universal tool in art, for one; I personally have only ever used the lens blur to emulate photo focus, and many people have never touched them. Photoshop is first and foremost a photo editing tool that got co-opted into an illustration tool, and many of the filters are mostly used in photo editing. A filter emulates currently existing tools from other artistic methods, and a filter over a blank canvas will do very little to make the canvas worth anything. AI images, however, are akin to me importing an image I found online and claiming it as my own because I typed the keywords into Google. The closest thing to it would be photobashing, but people who photobash have an artistic vision and their own sensibilities, and they take their base images from consenting sources because they themselves are liable and responsible for their work. AI is not. You would have to find a way to render the tool useless without substantial human artistic interpretation, which I can't see a way of doing without compromising the success of your program.

3) In that case you seem to be trying to reinvent free-to-use 3D models. If your aim is to make people quicker at art, in the long run you would be doing them a disservice. I refer you back to my original point 2): the reason artists have trouble with backgrounds in the first place is usually a lack of practice. The way to get faster in the long run is to learn the hard way, build real neural pathways, and practice perspective, lighting, and composition. General art practices of taking references, drawing from life, and training your eye to project your environment onto the canvas are going to be the only real solution in the long run (even with the tool, these would be necessary just to swiftly correct the errors made by the system).

4) In the scenario you describe, I can't imagine any real-life case where I wouldn't have to redo the entire image the AI gave me from scratch. Correcting lineart for anything more than very minor details would take longer than my usual process, which is fast because I'm used to it; a lot of it is instinctive and almost automatic at this point. That's also not accounting for style (which is also in the shapes, proportions, and line length and weight), or for the environmental storytelling opportunities that come with designing the space yourself. I don't know if you have a background in art, but it seems like you've never personally tried correcting something like a small perspective error in lineart, to understand how time-consuming it is compared to redoing a sketch multiple times. The further you get from the sketch phase, the longer even small corrections take.

5) I am not an avid Blender user. I learnt the basics last year in three weeks from YouTube, and I have exactly two Blender files on my computer. My understanding of Blender is very rudimentary, but I already have all the tools I need to make basic shapes to block in the space, which I can then paint over. My pages 32-34 are heavy in 3D paintover, but the Blender file has only a few low-poly structures in it. As a rule, Blender is easy to learn, free, and runs on just about anything if all you want is the stuff you're describing: "mob filler/abstract shapes". Stuff like crowds wouldn't be great either. Computers are notoriously bad at being truly random, so doppelgangers and uncanny valley would be inevitable. Plus, most artists know how to make a crowd with very little detail.

Generally, I think you need to get a better understanding of the artist's process and artistic techniques. I get that you want to do good and be useful, and that's a noble cause, but I get the feeling you don't have an intimate understanding of your target demographic's needs: where the time sinks are and which corners can be cut without the final product suffering. Again, these are problems with the base concept, without going into the problems with AI in art as a whole, such as ethical sourcing of training material and, more generally, what its existence is doing to the treatment of artists by the public, people using it as leverage to underpay their artists, for example. People are wary of AI because of the damage it's already done, despite it being sub-par in terms of results right now.

You are right. I'm not an artist so I don't intimately understand the process like you do.

Because I'm aware that I'm not, I've had dozens of hour-long conversations with artists to try to understand what their process is like and let them educate me on where their bottlenecks are. The main problems you point out are along these lines:

  • AI's fundamental lack of direction or artistic sensibilities
  • Having a generated horse on my canvas is not the same as photo-bashing one
  • It may help artists deliver completed projects faster, but to the detriment of improving their skills

Now, (1) is fundamentally true, but why should the AI need to be good at that to be helpful?

Meaning: if I'm moving house and I need to rent a robot helper from Tesla (because I need to vacate my apartment at 3 AM and no local moving company is open that late), I don't need that robot to know how to assemble furniture. I just need it to pick up really heavy objects and move them from Room A to Room B, and then I can assemble the fridge myself, right? Some things require human finesse, but not all things. Surely this is translatable to all forms of human labor: there's always a part that can be delegated and another that can't.

This makes perfect sense in my mind, but you seem to disagree with this premise. Help me understand: what am I missing?

I've asked every artist I talked to, "If you could delegate one task away from your daily workflow, what would it be?" and the overwhelming majority said background work. If you're telling me that this is not a problem worth solving, then what is a problem worth solving in your workflow?

Thank you for your response.

If you're curious to see it in action, then join the waitlist. I'm still working out the kinks, but I'm expecting to open a private beta to a handful of users in mid-June (about a month from now).

From the gif on the site, it looks like the tool is basically a style filter using AI? People upload their photobash or 3D asset image and it will blend/stylize the image to match their art? That's an interesting idea, if it somehow only allows user-made image assets. Otherwise, people could just upload stolen art to use as their image, just like with any other AI.

Putting copyright aside, how useful it is will depend on how good it is at its job. I've done a lot of photo-bashing and filters, and this stuff tends to be janky and require a lot of cleanup. When I stylize the work manually, the cleanup is baked into the stylizing process, making it more streamlined.

In my opinion, it'd be more useful if it could generate 3D models.

The current approach may work for standalone illustrations or a single panel, but the majority of the time sunk into comic backgrounds comes from the regular, consistently used spaces. So various angles/perspective points of the exact same space would be needed, plus being able to tweak certain objects as needed.