ChaoticNeutralCzech

joined 1 year ago
[–] [email protected] 2 points 1 week ago

commuting

TIL gazebos can live and work in different places

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Electric girls on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

This is the last one in the series. Bye!

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Electric girls on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Unlike photos, digital art upscaled with a well-trained algorithm will likely show little to no undesirable effect. Why? Well, the drawing originated as a series of brush strokes, fill areas, gradients etc. that could be represented in a vector format but are instead rendered on a pixel canvas. As long as no feature is smaller than 2 pixels, the Nyquist-Shannon sampling theorem effectively says that the original vector image can be reconstructed losslessly. (This is not a fully accurate explanation; in practice, algorithms need more pixels to make a good guess, especially if compression artifacts are present.) Suppose I gave you a low-res image of the flag of South Korea 🇰🇷 and asked you to manually upscale it for printing. Knowing that the flag has no small features, so there is no need to guess at detail (an assumption that does not hold for photos), you could redraw every stroke and arc with vector shapes in the same colors, then render them at an arbitrarily high resolution. AI upscalers trained on drawings somewhat imitate this process - not adding detail, just representing the original with more pixels so that it looks sharp on an HD screen. However, the original images are so low-res that artifacts are basically inevitable, which is why a link to the original is provided.
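The flag argument can be sketched in a few lines of Python. This is a toy: nearest-neighbour resampling stands in for a real upscaler like waifu2x, and the tiny two-colour "flag" grid plus the helper names are made up for illustration. The point it shows: when no feature is smaller than the sampling grid, the upscale round-trips losslessly.

```python
def upscale(img, factor):
    """Nearest-neighbour upscale of a 2-D list of pixel values."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in img for _ in range(factor)]

def downscale(img, factor):
    """Inverse sampling: keep every factor-th pixel in each direction."""
    return [row[::factor] for row in img[::factor]]

# A 2x4 "flag" made only of large flat regions (no feature under 2 px).
flag = [
    ["W", "W", "R", "R"],
    ["W", "W", "R", "R"],
]

big = upscale(flag, 3)            # 6x12: more pixels, no new detail
assert downscale(big, 3) == flag  # round trip is lossless
```

A photo would fail this round trip: real detail exists below the sampling grid, so the upscaler has to guess, which is exactly the guessing that produces artifacts.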

 

[–] [email protected] 9 points 2 weeks ago (1 children)
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Gazebo on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: The Infinity Gauntlet on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

14
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Frostpunk Automaton on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

See also: Land Dreadnought

18
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Crabsquid on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

See also: Seamoth and other Subnautica creatures in the comments

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: D20 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

22
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Knifehead Kaiju on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Robot (vacuum) cleaner on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

16
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: The Satellite-girl on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

This is the Horizon satellite from Random-tan Studio's cybermoe comic Sammy, page 18, prior to remastering.

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: Watchers on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

[–] [email protected] 6 points 3 weeks ago

There should be four in the front and four in the back to match the number of hemispheres.

[–] [email protected] 3 points 4 weeks ago

The posts seem to be getting better lately

[–] [email protected] 6 points 1 month ago* (last edited 2 weeks ago) (2 children)

TenTh also did Crabsquid, but I'm posting the images in order; it will be its turn in about 2 weeks. (Edit: here)

Here are some results from elsewhere on the internet, mostly by Dino-Rex-Makes. Feel free to feed the links to your posting script and schedule them.

Lava Larva
Pengwings
Crabsquid+Ampeel
Peeper
Peeper
Cuddlefish
Sea Monkey
Crashfish
Crashfish
Warper
Mesmer
Mesmer
Yellow Sub-MER-ine

[–] [email protected] 2 points 1 month ago (1 children)

You know.

I don't... Is there a disgusting story specific to the flamethrower?

Anyway, Elon Musk's enterprises were never not full of stupid ideas. He wanted to pay for his extensive tunnel network just by selling bricks made from the displaced soil. Did he expect millions of them to sell for hundreds of dollars each, like the limited-edition Supreme-branded ones? Or did he consider why roads were ever built on the surface if tunnels were so easy and profitable?

Around this time, he also claimed to have perfected solar roof tiles, while the demo houses actually featured no functional prototypes. The few units delivered were bad at both jobs. This didn't get nearly as much backlash as it should have, but hyperloop hype was still strong back then.

[–] [email protected] 3 points 1 month ago (2 children)

This is one of the more realistic body shapes you'll see on [email protected].

If you want to block all moe communities, they are conveniently listed in the sidebar.

[–] [email protected] 1 points 1 month ago (1 children)

In real mirror pics, the phone is always perfectly aligned with the frame (obviously).

[–] [email protected] 6 points 1 month ago

Needs more ads plastered at weird spots.

[–] [email protected] 11 points 1 month ago* (last edited 1 month ago)
[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

Actually, shaggy mane (Coprinus comatus) is edible.
