ChaoticNeutralCzech

joined 1 year ago
11
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 21 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

I also tried Upscayl, but it took about 1000x longer and "reinterpreted" the entire picture in an anime style, making lines thinner, losing detail, etc.

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 20 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Unlike photos, digital art upscaled with a well-trained algorithm will likely show little to no undesirable effect. Why? The drawing originated as a series of brush strokes, fill areas, gradients etc. that could be represented in a vector format but were instead rendered onto a pixel canvas. As long as no feature is smaller than 2 pixels, the Nyquist-Shannon sampling theorem effectively says that the original vector image can be reconstructed losslessly. (This is not a fully accurate explanation; in practice, algorithms need more pixels to make a good guess, especially if compression artifacts are present.)

Suppose I gave you a low-res image of the flag of South Korea 🇰🇷 and asked you to manually upscale it for printing. Knowing that the flag has no small features, so there is no need to guess at detail (an assumption that does not hold for photos), you could redraw every stroke and arc with vector shapes in the same colors, then render them at an arbitrarily high resolution. AI upscalers trained on drawings somewhat imitate this process - not adding detail, just representing the original with more pixels so that it looks sharp on an HD screen. However, the original images are so low-res that artifacts are basically inevitable, which is why a link to the original is provided.
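
To make the flag example concrete, here is a minimal Python sketch of the "redraw as resolution-independent shapes, then rasterize at any size" idea, using Pillow; the disc shape and file names are mine, purely illustrative:

    # Re-render the same resolution-independent shape description at two sizes.
    # Unlike pixel interpolation, edges stay sharp at any scale.
    from PIL import Image, ImageDraw

    def render_disc(size: int) -> Image.Image:
        """Rasterize a red disc on white - a stand-in for 'vector' art."""
        img = Image.new("RGB", (size, size), "white")
        draw = ImageDraw.Draw(img)
        r = size * 0.375                  # radius as a fraction of the canvas
        c = size / 2                      # center, also resolution-independent
        draw.ellipse((c - r, c - r, c + r, c + r), fill=(205, 49, 58))
        return img

    render_disc(64).save("disc_64.png")      # low-res "original"
    render_disc(1024).save("disc_1024.png")  # same shapes, 16x the pixels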

51
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 20 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

The reference image is the Mandelbrot set zoomed in by a factor of about 1 million and rotated 90° anticlockwise.
x = -Im(c) = -0.1318252536 ∓ 0.0000011001; y = Re(c) = -0.7436447860 ± 0.0000014668
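
For the curious, a minimal escape-time sketch of how such a zoom is computed; the iteration cap is my choice, and c is taken from the caption above:

    # Iterate z -> z^2 + c for the zoom's center point.
    # Points that never escape |z| > 2 belong to the Mandelbrot set.
    def escape_time(c: complex, max_iter: int = 1000) -> int:
        z = 0j
        for n in range(max_iter):
            z = z * z + c
            if abs(z) > 2.0:
                return n
        return max_iter

    c = complex(-0.7436447860, 0.1318252536)  # Re(c) = y, Im(c) = -x from above
    print(escape_time(c))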

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 19 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 19 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

I wonder where the beam would come from. The eyes?

26
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 18 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

Finally, a MorphMoe waifu where I could actually figure out what the reference was.

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 18 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

46
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 17 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 17 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

17
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 16 on Tapas

I tried upscaling with waifu2x (model: upconv_7_anime_style_art_rgb), but it didn't go too well.

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 16 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

 

Artist: Onion-Oni aka TenTh from Random-tan Studio
Original post: #Humanization 15 on Tapas (warning: JS-heavy site)

Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original

[โ€“] [email protected] 5 points 6 months ago

The hole in the fuselage that caused them to be sucked out was actually made by one of them in a suicide/homicide. Very tragic. Somebody invest in mental health please!

[โ€“] [email protected] 9 points 6 months ago

Keyword: cirno head empty

[โ€“] [email protected] 24 points 6 months ago* (last edited 6 months ago)

Because

  1. When the internet was rolling out, the decentralized, open, best-effort approach of TCP/IP thankfully won over the telephone companies' proposal for a centralized system
  2. IPv6 is still not universal for some damn reason
  3. Onion addresses solve these problems, but good luck getting everyone aboard with Tor
  4. You always trade anonymity for reachability, and with the number of threats out there, NAT and firewalls have been put up to make it harder for unsolicited requests to reach you by default (see the sketch below this list)
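
A small Python illustration of point 4 (host name and port are my own picks): outbound connections traverse NAT fine because the router keeps a mapping for the reply, while an inbound listener stays unreachable from outside unless a port is explicitly forwarded.

    import socket

    # Outbound: the NAT router creates a temporary mapping for the reply,
    # so this works from almost any network.
    s = socket.create_connection(("example.com", 80), timeout=5)
    print("local address:", s.getsockname())  # likely RFC 1918, e.g. 192.168.x.x
    s.close()

    # Inbound: unsolicited requests from the internet only reach this
    # listener if the router explicitly forwards the port to us.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 8080))
    srv.listen()
    print("listening on", srv.getsockname())
    srv.close()
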
[โ€“] [email protected] 5 points 6 months ago

Don't go into the Seine, you'll drown!

(I know it's pronounced [ˈzaɪ̯nə] 🔊, I can speak German)

[โ€“] [email protected] 4 points 6 months ago

A power bank adds another step of energy conversion, and the cables are annoying.

[โ€“] [email protected] 2 points 7 months ago* (last edited 7 months ago)

I hope they have the decency to sell reversible AC units (which double as heat pumps with just a few basic components extra) and don't overestimate their customers' heating power requirements (a 10 kW heat pump can replace a 30 kW furnace; it will just run a higher percentage of the time - heating used to be grossly oversized because high-power furnaces were cheap).
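
The duty-cycle claim is easy to sanity-check; a quick sketch (the furnace's 25% duty cycle is an assumed figure):

    # Heat delivered = rated power x duty cycle, so a smaller unit running
    # longer can match an oversized furnace's output.
    furnace_kw, furnace_duty = 30.0, 0.25   # oversized furnace, short bursts (assumed)
    heatpump_kw = 10.0
    required_duty = furnace_kw * furnace_duty / heatpump_kw
    print(f"heat pump duty cycle needed: {required_duty:.0%}")  # -> 75%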

[โ€“] [email protected] 6 points 7 months ago (1 children)

Yeah, it's fun, but the temperature needs to be right. With rising temperature, the paper turns black, then light gray, then brown, and finally glowing orange.

[โ€“] [email protected] 8 points 7 months ago

The IMU probably drifts by some small percentage, but an intermittent GPS signal every few kilometers should ensure that it never gets too far off course.
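
A toy 1-D version of that correction loop might look like this (gain, drift rate and fix interval are made-up numbers):

    # Dead-reckon with the drifting IMU; pull back toward the GPS fix
    # whenever one arrives.
    def fuse(position, imu_step, gps_fix=None, gain=0.8):
        position += imu_step              # integrate IMU, drift accumulates
        if gps_fix is not None:           # occasional absolute position
            position += gain * (gps_fix - position)
        return position

    pos, truth = 0.0, 0.0
    for step in range(1, 101):
        truth += 1.0                      # true motion: 1 m per step
        imu = 1.0 * 1.02                  # IMU overestimates by 2% (assumed)
        fix = truth if step % 20 == 0 else None  # GPS fix every 20 steps
        pos = fuse(pos, imu, fix)
    print(f"final error after 100 m: {abs(pos - truth):.2f} m")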

[โ€“] [email protected] 11 points 7 months ago* (last edited 7 months ago)

I am not aware of any receipt printers using lasers - thermal printers have an array of resistors that get hot where needed. I know how a laser printer works, but it is hard to explain in 12 or so words. Inkjets are way easier: you can just say "squirt squirt oops". Anyway, here is the whole process (with a toy sketch after the list)...

  1. A photosensitive drum gets a negative electrostatic charge.
  2. A laser reflected off a rotating polygonal mirror scans lines across the drum's surface. This removes charge from parts of the drum that should not be covered in toner.
  3. A high-voltage corona wire inside the toner reservoir charges an amount of toner positively.
  4. The charged drum rotates past the corona wire, getting covered in toner where its negative charge remains.
  5. Paper is pushed against the drum and the powdery toner is transferred to it.
  6. The paper continues into a fuser, a little oven where a heating element briefly makes the toner hot enough to melt, its powder particles forming a permanent bond with each other and with the paper. (The heater is usually stationary and heats the paper from below. The fuser drum that pushes the paper against the heater can get sticky and pick up some of the toner, making images repeat down the page. This is the most common failure mode that cannot be resolved through regular maintenance such as replacing the toner cartridge and printing cleaning pages. However, almost all laser printers have a cheap replacement fuser module, or just its drum, available, so the repair is usually worthwhile.)
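
As a toy model of steps 1-5 above (the bitmap and charge values are purely illustrative):

    # Each stage is one line: charge the drum, discharge it with the laser
    # where the page should stay white, develop with toner, transfer to paper.
    def print_bitmap(bitmap):
        drum = [[1] * len(row) for row in bitmap]   # 1: uniform negative charge
        for y, row in enumerate(bitmap):            # 2: laser exposure
            for x, dark in enumerate(row):
                if not dark:
                    drum[y][x] = 0                  # charge removed -> stays white
        toner = [row[:] for row in drum]            # 3+4: toner sticks to remaining charge
        return toner                                # 5: transferred to paper (6: fused)

    glyph = [[0,1,1,1,1,0],
             [1,0,0,0,0,1],
             [1,1,1,1,1,1],
             [1,0,0,0,0,1]]
    for row in print_bitmap(glyph):
        print("".join("#" if dot else "." for dot in row))
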
[โ€“] [email protected] 2 points 7 months ago* (last edited 7 months ago) (1 children)

Oh, I forgot about the quality of US infrastructure. If an engineer needs to make a voice call to get a line de-energized because a train has derailed, that's a systemic problem. I think all metros in the EU have telemetry, and any major railway implements ETCS. Weird that "safety first" means schoolkids cannot watch an eclipse while public transport infrastructure goes way underfunded.

Also, a certain "blue line" keeps going off the rails in the US. I read this out of context and thought the police staged a riot.

[โ€“] [email protected] 35 points 7 months ago* (last edited 7 months ago) (1 children)

Time travel is a prerequisite but don't worry, you can just

from __future__ import antigravity