ChaoticNeutralCzech

joined 1 year ago
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 15 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Unlike photos, digital art tends to survive upscaling by a well-trained algorithm with little to no undesirable effect. Why? The drawing originated as a series of brush strokes, fill areas, gradients etc. that could be represented in a vector format but were instead rendered onto a pixel canvas. As long as no feature is smaller than 2 pixels, the Nyquist-Shannon sampling theorem effectively says that the original vector image can be reconstructed losslessly. (This is not a fully accurate explanation; in practice, algorithms need more pixels to make a good guess, especially if compression artifacts are present.)

Suppose I gave you a low-res image of the flag of South Korea 🇰🇷 and asked you to manually upscale it for printing. Knowing that the flag has no small features, so there is no need to guess at detail (an assumption that does not hold for photos), you could redraw every stroke and arc as vector shapes in the same colors and then render them at an arbitrarily high resolution. AI upscalers trained on drawings somewhat imitate this process: they add no detail, they just try to represent the original with more pixels so that it looks sharp on an HD screen. However, the original images are so low-res that artifacts are basically inevitable, which is why a link to the original is provided.
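To make the flag analogy concrete, here is a minimal Python/Pillow sketch (the file names and the toy "flag" are made up for illustration, not taken from any real workflow): the same vector-style description is rendered once at low resolution and once at 16× that resolution. Nothing is guessed; the shapes simply get more pixels, which is roughly what a drawing-trained upscaler tries to approximate from the raster alone.

```python
from PIL import Image, ImageDraw

def draw_flag(scale: int) -> Image.Image:
    """Render a toy 'flag' (white field, red disc) from its vector-like
    description at an arbitrary pixel resolution."""
    img = Image.new("RGB", (30 * scale, 20 * scale), "white")
    d = ImageDraw.Draw(img)
    d.ellipse((10 * scale, 5 * scale, 20 * scale, 15 * scale), fill="red")
    return img

draw_flag(1).save("flag_1x.png")    # 30x20 px: the "low-res original"
draw_flag(16).save("flag_16x.png")  # 480x320 px: same shapes, no invented detail
```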

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 14 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 14 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Paint timelapse available!

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 6 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 13 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 13 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

7
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 12 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

29
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 12 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

26
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Yes, some Linux distros show blue kernel-panic screens too, but I'm tagging the post [Windows] because that's the "franchise" where the "character" debuted.

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 11 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

22
Sakura (Random-tan Studio) (files.catbox.moe)
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 11 on Tapas

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

25
Katana (Random-tan Studio) (files.catbox.moe)
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 10 on Tapas

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

Edit: fixed image link. Who knew global variables in Python were this tricky?

14
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 10 on Tapas

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

[–] [email protected] 6 points 7 months ago (5 children)

Why? Passenger trains and subways are already very safe thanks to remote control and monitoring systems, dead man's switches, etc. Many urban rail systems don't use drivers at all! Is there a subway accident from the past 20 years that could have been prevented by an extra driver?

[–] [email protected] 3 points 7 months ago* (last edited 7 months ago) (1 children)

Sure, no algorithm can extract more information from a single photo than it contains. But how about combining detail caught across multiple frames of video? Some phones already do this kind of thing, using camera shake to collect multiple slightly offset samples for highly zoomed photos (a rough sketch of the idea is below).

Still, the problem remains that the results from a cherry-picked algorithm or outright hand-crafted pics may be presented.
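For what it's worth, here is a minimal numpy sketch of the shift-and-add idea behind such multi-frame zoom. Everything here is illustrative: the function name, the assumption that sub-pixel shifts have already been estimated by some alignment step, and the synthetic test data are made up, not any phone vendor's actual pipeline.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Toy multi-frame super-resolution by shift-and-add.

    frames: equally sized 2-D grayscale arrays (low-res exposures)
    shifts: per-frame (dy, dx) sub-pixel offsets in [0, 1) low-res pixels,
            assumed to come from an alignment step (e.g. camera-shake estimation)
    factor: upscaling factor of the output grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Snap each frame's sub-pixel offset to the nearest cell of the fine grid.
        oy = min(int(round(dy * factor)), factor - 1)
        ox = min(int(round(dx * factor)), factor - 1)
        acc[oy::factor, ox::factor] += frame
        hits[oy::factor, ox::factor] += 1
    # Average wherever at least one sample landed; unvisited cells stay 0.
    return acc / np.maximum(hits, 1)

# Synthetic check: four quarter-offset, downsampled views of a scene
# contain enough distinct samples to rebuild it exactly.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
frames = [scene[0::2, 0::2], scene[0::2, 1::2], scene[1::2, 0::2], scene[1::2, 1::2]]
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
assert np.allclose(shift_and_add(frames, shifts, factor=2), scene)
```

The averaging also suppresses sensor noise, which is a large part of why stacking several shaky handheld frames can beat a single exposure.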

[–] [email protected] 71 points 7 months ago (9 children)

Laser printers more accurately "bake paper so that number powder sticks to it"

[–] [email protected] 3 points 7 months ago

I wonder if there is a notification ad blocker with community-submitted sets of regex patterns that root users could use (something along the lines of the sketch below).
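Purely as an illustration of the idea (the patterns and the blocklist here are invented, not from any existing app), matching notification text against a community-maintained regex list could look like this:

```python
import re

# Hypothetical community-maintained blocklist: one regex per promotional pattern.
AD_PATTERNS = [
    re.compile(r"\b(sale|% off|limited[- ]time offer)\b", re.IGNORECASE),
    re.compile(r"\bcongratulations, you won\b", re.IGNORECASE),
]

def is_ad(notification_text: str) -> bool:
    """Return True if any community pattern matches the notification."""
    return any(p.search(notification_text) for p in AD_PATTERNS)

print(is_ad("Limited-time offer: 50% off in-app coins!"))   # True
print(is_ad("Your package has been delivered"))              # False
```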

[–] [email protected] 8 points 7 months ago

Finally we know...

[–] [email protected] 6 points 7 months ago (1 children)
[–] [email protected] 11 points 8 months ago

Here is the auto-generated transcript (for research purposes only):

oh isn't it a beautiful day how cutie the sun's bright the birds are singing nicely it's a good as day as any to go and touch grass yeah I I know I know um it's important though I need to go to the store it's been a while and you know replenish the pantry um it'll be okay though I promise I'll be back soon it shouldn't take too long and no no you don't you please don't come with me I can touch enough grass for the both of us and I promise I'll be back soon okay nothing bad's going to happen but in case I don't come back please feed mocha and take care of her for me no I've got to go now before I check it out I'll I'll see you later okay you just stay nice and safe inside okay good cutie

all right that was easy got everything I need yeah okay I've got to hurry home now uh I've probably got to switch out the cuties bandages maybe we refill the ice pack and ah goodness oh I didn't see that puddle it was surprisingly deep and now my boots are muddy that's okay though once I get home I can wipe them huh holy goodness gracious why is no one watching where they're going right now I almost dropped the eggs all right well that's okay anyways it's that a car please bra out think I'm okay oh no not another one the in my ankles no please stop this way please no happens I need to get home if I can make it home

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago)

The collection was distributed by a German publisher, but the pictures are mostly US-centric (e.g., pictures of Earth always look like 🌎). I found only three German-language pictures in there! Because it compresses down to only about 5-10 MB (depending on the method), the CD version had a few pieces of image-editing shareware and games thrown in, but even together that is only about 33 MB, or 5% of the CD-ROM's capacity. They still sold at least three such volumes separately! And, for 30 DM, also small booklets with a picture index (since loading from floppy disks was slow) and an introduction to the English-language software. I have the first one, but not the original files. The anti-IBM satire is everywhere, but mostly around 052-065, which is ironic given that the book calls itself „5000 Cliparts für PC und Amiga“.

[–] [email protected] 13 points 9 months ago* (last edited 9 months ago) (2 children)

If you look at the collection, it is apparent that they often group unrelated clipart into one picture. Therefore, the four icons, the penguin, the bow and the "comic" panel are likely completely unrelated to each other. Despite being bundled with DOS software, the monochrome pictures are probably best suited to the Mac's high-res monochrome screen, and many seem to have been made by Mac fans mocking PC users in comics like these. They would not have known about Tux the Linux penguin back then, so it's a generic penguin that does not represent Linux (but appears next to a robotic Mac in one picture??)
055

As for how the comic is supposed to be funny, I'd guess the point is how difficult setting up a PC used to be(?). No idea if the thought bubble with the pirate is relevant, but it is in a dithered area, meaning it's likely not meant to be cut and pasted elsewhere.

[–] [email protected] 6 points 9 months ago* (last edited 9 months ago)

⛨ ☭ ⇑⥣̂ ⚒ ☾̙╳☽̘

[–] [email protected] 4 points 10 months ago

I cannot stop laughing. Peak comedy.
