mike_wooskey

joined 1 year ago
[–] [email protected] 1 points 3 months ago (4 children)

Yes, @[email protected], now that I know I can use sentence syntax in automations, I have built one automation to handle my specific needs. But each trigger is a hardcoded value instead of a "variable". For example, trigger 1 is "sentence = 'what is the date of my birthday'" and I conditionally trigger an action to speak the value of input_date.event_1, because I know that's where I stored the date for "my birthday".

What would be awesome is your 2nd suggestion: passing the name of the input_date helper through to the response with a wildcard. I can't figure out how to do that. I've tried defining and using slots but I just don't understand the syntax. Which file do I define the slots in, and what is the syntax?
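
To show what I mean, here's the kind of thing I've been trying to write in a file under config/custom_sentences/en/ - a sketch with made-up intent and helper names (it doesn't work for me yet, so treat the syntax as a guess; I'm also writing input_datetime on the assumption that's the actual domain of my date helpers):

intents:
  GetEventDate:
    data:
      - sentences:
          - "what is the date of {event_name}"
lists:
  event_name:
    values:
      # map the spoken phrase (in) to the value the intent receives (out)
      - in: "my birthday"
        out: "event_1"
      - in: "our anniversary"
        out: "event_2"

And then, as I understand it, the slot value should be available as a template variable in an intent_script in configuration.yaml:

intent_script:
  GetEventDate:
    speech:
      text: "{{ states('input_datetime.' ~ event_name) }}"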

[–] [email protected] 1 points 3 months ago (1 children)

By "server log", do you mean traefik's log? If so, this is the only thing I could find (and I don't know what it means): https://lemmy.d.thewooskeys.com/comment/514711

[–] [email protected] 1 points 3 months ago (1 children)

From traefik's access.log:

{"ClientAddr":"192.168.1.17:45930","ClientHost":"192.168.1.17","ClientPort":"45930","ClientUsername":"-","DownstreamContentSize":21,"DownstreamStatus":500,"Duration":13526669,"OriginContentSize":21,"OriginDuration":13462593,"OriginStatus":500,"Overhead":64076,"RequestAddr":"whoami.mydomain.com","RequestContentSize":0,"RequestCount":16032,"RequestHost":"whoami.mydomain.com","RequestMethod":"GET","RequestPath":"/","RequestPort":"-","RequestProtocol":"HTTP/2.0","RequestScheme":"https","RetryAttempts":0,"RouterName":"websecure-whoami-vpn@file","ServiceAddr":"10.13.16.1","ServiceName":"whoami-vpn@file","ServiceURL":{"Scheme":"https","Opaque":"","User":null,"Host":"10.13.16.1","Path":"","RawPath":"","OmitHost":false,"ForceQuery":false,"RawQuery":"","Fragment":"","RawFragment":""},"StartLocal":"2024-04-30T00:21:51.533176765Z","StartUTC":"2024-04-30T00:21:51.533176765Z","TLSCipher":"TLS_CHACHA20_POLY1305_SHA256","TLSVersion":"1.3","entryPointName":"websecure","level":"info","msg":"","time":"2024-04-30T00:21:51Z"}
{"ClientAddr":"192.168.1.17:45930","ClientHost":"192.168.1.17","ClientPort":"45930","ClientUsername":"-","DownstreamContentSize":21,"DownstreamStatus":500,"Duration":13754666,"OriginContentSize":21,"OriginDuration":13696179,"OriginStatus":500,"Overhead":58487,"RequestAddr":"whoami.mydomain.com","RequestContentSize":0,"RequestCount":16033,"RequestHost":"whoami.mydomain.com","RequestMethod":"GET","RequestPath":"/favicon.ico","RequestPort":"-","RequestProtocol":"HTTP/2.0","RequestScheme":"https","RetryAttempts":0,"RouterName":"websecure-whoami-vpn@file","ServiceAddr":"10.13.16.1","ServiceName":"whoami-vpn@file","ServiceURL":{"Scheme":"https","Opaque":"","User":null,"Host":"10.13.16.1","Path":"","RawPath":"","OmitHost":false,"ForceQuery":false,"RawQuery":"","Fragment":"","RawFragment":""},"StartLocal":"2024-04-30T00:21:51.74274202Z","StartUTC":"2024-04-30T00:21:51.74274202Z","TLSCipher":"TLS_CHACHA20_POLY1305_SHA256","TLSVersion":"1.3","entryPointName":"websecure","level":"info","msg":"","time":"2024-04-30T00:21:51Z"}

All I can tell from this is that there is a DownstreamStatus of 500. I don't know what that means.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago) (6 children)

Thanks, @[email protected]. I didn't know you could use special sentence syntax in automations. That's pretty helpful, because actions can be conditional, and I think you can even make them conditional based on which specific trigger fired the automation.

It still seems odd that I'd have to make separate automations for each helper I want to address (or separate automation conditions for each), as opposed to having the spoken command contain a "variable" and then using that variable to determine which input helper's value to return. But maybe that's possible and it's just beyond my skill level.

[–] [email protected] 1 points 3 months ago (3 children)

Thanks for helping, @[email protected].

Both traefik containers (on the "server" and "client" VMs) and the wireguard server container were built with TRAEFIK_NETWORK_MODE=host. The VMs can ping each other and the Wireguard containers can ping each other.

Both traefik containers were built with TRAEFIK_LOG_LEVEL=warn but I changed them both to TRAEFIK_LOG_LEVEL=info just now. There's a tad more info in the logs, but nothing that seems pertinent.
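
If info doesn't turn anything up, I guess the next step is debug. A one-line sketch of the .env change, assuming my TRAEFIK_LOG_LEVEL variable ends up feeding Traefik's --log.level option:

TRAEFIK_LOG_LEVEL=debug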

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)

Also, just to make sure the app is indeed running, I curled it from its own container (I'm using myapp here instead of whoami, because whoami doesn't have a shell):

$ curl -L -k --header 'Host: myapp.mydomain.com' localhost:8080

I can't seem to display HTML tags in this comment, but the result is the HTML of the app's web page - so the app is up and running.

[–] [email protected] 0 points 3 months ago (1 children)

Thanks so much for helping me troubleshoot this, @[email protected]!

> Is the browser also using the LAN router for DNS? Some browsers are set to use DoT or DoH for DNS, which would mean they’d bypass your router DNS.

My browser was using DoH, but I turned it off and still have the same issue.

> Do you also get “Internal Server Error” if you make the request with curl on the CLI on the laptop?

Yes, running curl -L -k --header 'Host: whoami.mydomain.com' 192.168.1.51 on the laptop results in "Internal Server Error".

> How did you check that mydomain is being resolved correctly on the laptop?

ping whoami.mydomain.com hits 192.168.1.51.

> What do you get with curl from the other VM, or from the router, or from the host machine of the VM?

From the router:

Shell Output - curl -L -k --header 'Host: whoami.mydomain.com' 192.168.1.51
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    17  100    17    0     0   8200      0 --:--:-- --:--:-- --:--:-- 17000
100    21  100    21    0     0    649      0 --:--:-- --:--:-- --:--:--   649
Internal Server Error

From the wireguard client container on the "client" VM:

curl -L -k --header 'Host: whoami.mydomain.com' 192.168.1.51
Internal Server Error

From the traefik container on the "client" VM:

$ curl -L -k --header 'Host: whoami.mydomain.com' 192.168.1.51
Internal Server Error

From the "client" VM itself:

# curl -L -k --header 'Host: whoami.mydomain.com' 192.168.1.51
Internal Server Error

From the wireguard container on the "server" VM:

# curl -L -k --header 'Host: whoami.mydomain.com' 192.168.1.51
Internal Server Error

From the traefik container on the "server" VM (This is interesting. Why can't I ping from this traefik installation but a can from the other? But even though it won't ping, it did resolve to the correct IP):

$ ping whoami.mydomain.com
PING whoami.mydomain.com (192.168.1.51): 56 data bytes
ping: permission denied (are you root?)

From the "server" VM itself:

# curl -L -k --header 'Host: whoami.mydomain.com' 192.168.1.51
Internal Server Error
[–] [email protected] 1 points 3 months ago (3 children)

Thanks for helping, @[email protected].

I'm browsing from my laptop on the same network as Proxmox: 192.168.1.0/24.

The tunnel is relevant in that my ultimate goal is to host "client" in the cloud so I can access my apps from anywhere while all traffic into my house goes through a VPN.

The VMs' IPs are 192.168.1.50 ("server") and 192.168.1.51 ("client"). They can see everything on their subnet, and everything on their subnet can see them.

Everything is using my router for DNS, and my router points myapp.mydomain.com and whoami.mydomain.com to “client”. And by "everything" I mean all computers on the subnet and all containers in this project.

Both VMs and my laptop resolve myapp.mydomain.com and whoami.mydomain.com to 192.168.1.51, which is "client", and can ping it.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)

Thanks for helping, @[email protected].

Both wireguard containers are using my router for DNS, and my router points myapp.mydomain.com and whoami.mydomain.com to "client".

[–] [email protected] 1 points 3 months ago (1 children)

I should add that I'm running Traefik 2.11.2 and WireGuard from the Linuxserver image lscr.io/linuxserver/wireguard, version v1.0.20210914-ls22.

[–] [email protected] 4 points 3 months ago (1 children)

They could choose a different business model to earn revenue from their videos, one that doesn't rely on Google or on the current model where personal privacy is the commodity. It could also be a difficult transition. Is it worth it to them? To you?

[–] [email protected] 1 points 4 months ago (1 children)

I don't know if your problem is the same as mine was, but the symptom sounds the same.

The docker-compose.yaml file shown in the Forgejo documentation for docker installation shows this mount:

    volumes:
      - ./forgejo:/data

For me, Forgejo installed and created new resource files in /data, ignoring the resource files Gitea had already made.

I changed the volume to:

    volumes:
      - data:/var/lib/gitea

Forgejo then recognized the gitea resources.
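
For completeness, the relevant part of my compose file now looks roughly like this (the image tag is just an example, and the named volume data is one my Gitea setup had already created - yours may differ):

services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:1.21  # use whatever tag you run
    volumes:
      - data:/var/lib/gitea

volumes:
  data: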

self-cleaning litter box? (lemmy.d.thewooskeys.com)
 

Does anyone have any experience with self-cleaning cat litter boxes? The ability to connect one to Home Assistant doesn't really seem useful to me - maybe it would be nice for HA to alert you when the litter box needs to be changed. But I'm really just curious whether any particular model of self-cleaning litter box is any good, even by itself, without any "smart" features. We now have 4 cats, and it would be nice not to have to clean litter boxes manually once or twice every day.

 

I bought an old iPad 2 for the purpose of viewing a Home Assistant dashboard via a web browser. My thinking was that the ability to browse the web was the sole requirement for a tablet for this purpose, but I was wrong: Home Assistant's web pages apparently require a newer version of JavaScript than iOS 9.3.5 can handle, and the iPad 2 can only be updated to iOS 9.3.5.

So is it possible to flash a newer OS (e.g., Linux) onto an old iPad 2? ChatGPT says it's not possible because no bootloader exploit for the iPad 2 is known, but ChatGPT is often wrong.

 

I'm confused by the different elements of HA's voice assistant sentences.

  1. What's the difference between a conversation and an intent_script? Per HA's custom sentence example, a conversation has an intents sub-element, and an intent_script doesn't. Does a conversation's intent merely declare the element that will respond to the sentence, while an intent_script is purely the response (i.e., does an intent point to an intent_script)?

  2. HA then explains that while the example above defined the conversation and intent_script in configuration.yaml, you can also define intents in config/custom_sentences/. Should you use both of these methods simultaneously, or will that cause conflicts or degrade performance? I wouldn't think you should define the same sentence in both places, but the data structures of the two examples are different - is one better than the other?

In configuration.yaml:

conversation:
  intents:
    YearOfVoice:
      - "how is the year of voice going"

In config/custom_sentences/en:

intents:
  SetVolume:
    data:
      - sentences:
          - "(set|change) {media_player} volume to {volume} [percent]"
          - "(set|change) [the] volume for {media_player} to {volume} [percent]"

  3. Then they say responses for existing intents can be customized as well in config/custom_sentences/. What's the difference between a response and an intent_script? It seems like an intent_script can only be defined in configuration.yaml and responses can only be defined in config/custom_sentences/ - is that right?

Thanks for any clarification you can share.

 

I have a Dreametech L10s Ultra vacuum that HA recognizes via the Xiaomi Miot Auto integration. I'm trying to add a custom:xiaomi-vacuum-map-card to a dashboard; the vacuum is recognized, but the camera (which I guess is the map) isn't working due to "Invalid calibration". But the calibration is whatever was automatically set by the card when I chose the vacuum. Hmmm.

I have the camera/map set in configuration.yaml as follows:

camera:
  - platform: xiaomi_cloud_map_extractor
    host: !secret xiaomi_vacuum_host
    token: !secret xiaomi_vacuum_token
    username: !secret xiaomi_cloud_username
    password: !secret xiaomi_cloud_password
    draw: ['all']
    attributes:
      - calibration_points
      - charger
      - cleaned_rooms
      - country
      - goto_path
      - goto_predicted_path
      - goto
      - ignored_obstacles_with_photo
      - ignored_obstacles
      - image
      - is_empty
      - map_name
      - no_go_areas
      - no_mopping_areas
      - obstacles_with_photo
      - obstacles
      - path
      - room_numbers
      - rooms
      - vacuum_position
      - vacuum_room_name
      - vacuum_room
      - walls
      - zones

This vacuum has not been Valetudo-ed - it's in new condition from the vendor.

Does anyone have any suggestions?
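
In case it helps: my card config looks roughly like this (entity IDs are from my setup, and calibration_source is the option I believe is supposed to pull the calibration from the camera above):

type: custom:xiaomi-vacuum-map-card
entity: vacuum.dreame_l10s_ultra  # my vacuum entity; yours will differ
map_source:
  camera: camera.xiaomi_cloud_map_extractor
calibration_source:
  camera: true  # read calibration_points from the camera's attributes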

 

My goal is to sync podcast episodes (the actual audio files) and their play state (played or unplayed, and how many minutes I've already listened) between devices, so I can stop listening to an episode on my phone, for example, and continue the same episode on my desktop computer from the point where I stopped on my phone.

I'm using AntennaPod on GrapheneOS (Android 14), and for desktop podcast listening I'm using Podfetch (self hosted). I'm also self-hosting a GPodder instance, and in Podfetch I have GPODDER_INTEGRATION_ENABLED set to true.
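
(For reference, that's just an environment variable on the Podfetch container - a minimal compose sketch, with the service name assumed:)

services:
  podfetch:
    environment:
      - GPODDER_INTEGRATION_ENABLED=true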

In AntennaPod, I'm able to configure Synchronization to GPodder.net (though my own instance of GPodder is at a different domain, AntennaPod calls the GPodder configuration "GPodder.net"), enter my self-hosted URL and credentials, and AntennaPod logs in, but it fails to sync. I don't know where AntennaPod's logs are so I don't have any details about why the sync fails.

Also confusing to me is how to manage podcast subscriptions. It seems I can manually add podcasts to either GPodder or Podfetch, but adding a podcast to one doesn't add it to the other. The same happens with episodes: if I manually add the same podcast to both GPodder and Podfetch and download an episode in one environment, the episode isn't also downloaded in the other.

Has anyone successfully gotten these three apps working together? Can you help me figure out what I'm doing wrong?

Thanks!

 

I have some Atom Echos installed as HA remote voice assistants. They're very cool, but they say "I'm sorry, I didn't understand that" a bit too often when I'm not addressing them.

The Echos think I'm giving them commands when people are having a discussion in the room or when a show/movie/music is playing. I have a custom wakeword, but I don't think any background sounds resemble it - there is only one word in English that rhymes with the first part of my wake word.

So I'm wondering if there's a way to configure the Echos to be more strict on what they consider to be the wakeword, or to be less attentive to ambient sound (or to require a more direct command, like "WAKEWORD" said kind of loud).
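
The only knobs I've found so far are in the openWakeWord add-on configuration - something like this, if I understand the option names right (the values here are guesses; please correct me):

threshold: 0.8  # activation probability cutoff; higher = stricter match
trigger_level: 2  # consecutive activations required before waking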

 

I got some Atom Echos, configured them, and they work! I even customized my own wakeword and it worked on the first try. Thanks, Home Assistant team, for such an awesome product as Home Assistant and for fantastic documentation.

Though the Echos and voice recognition work, I'm waiting about 28 seconds between speaking and having Home Assistant respond. "OK Nabu, do the thing"... then I wait ~28 seconds, and then at the same moment I hear the Echo say "Done" and Home Assistant responds.

Is the delay due to the Echos' small/cheap/slow processors? They react instantly to the wakeword, but perhaps that requires less processing power because it's trained. Is the delay due to forwarding the audio of my spoken command over the network to Home Assistant so Whisper can process it? I can transfer other content over my network very quickly, and I doubt a few spoken words amount to much data. Or is the delay in Whisper itself processing my command?

What has your experience been with the Echos and openwakeword?

 

Howdy. I have a bash script called backup.sh in /config, and I've added the shell_command to configuration.yaml:

shell_command:
  backup: /root/config/backup.sh

I'm running HAOS, and the shell script has the correct owner:group and permissions. I can execute the script when I ssh into HAOS, but when I call the Shell Command: backup service from HA's Developer Tools, I get:

stderr: "/bin/sh: /root/config/gitupdate.sh: not found"
returncode: 127

Any thoughts on this?
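
In case it's path-related: my understanding is that the Core container mounts the config folder at /config (not /root/config), so one thing I plan to try is:

shell_command:
  backup: bash /config/backup.sh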

 

Howdy. I have HAOS running in a VirtualBox VM on a computer on my private subnet (let's call it the .150 subnet). All my IoT devices are on my .151 subnet. HA can see most of my IoT devices because I'm not currently isolating the subnets, but my vacuum is defying discovery because its discovery traffic is UDP, which doesn't cross the subnets. I'm sure there's a way to configure the router to allow cross-subnet discovery, but it would be better all around if HAOS were on the IoT subnet.

Is it possible to make HAOS think it's on the .151 subnet, even though the host computer for the VM running HAOS is on the .150 subnet?

I've read briefly about VirtualBox's networking features, but not only do I know nothing about them, I don't even know generally whether a VM can be configured to be on a different subnet than its host. I would think not, because when I do isolate the subnets, nothing that's physically on the .151 subnet would be able to reach the host computer on the .150 subnet to get to the VM that thinks it's on the .151 subnet. But I'm guessing.
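
The closest thing I've found is VirtualBox's bridged networking, which (if I understand it) puts the VM's virtual NIC directly on whatever network the chosen host adapter is on - so it would only help if the host had an interface (or VLAN) on the .151 subnet. A sketch, with the VM and adapter names made up:

VBoxManage modifyvm "haos" --nic1 bridged --bridgeadapter1 enp3s0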

Also, HA has some network configs:

I changed these from .150 to .151 but simply lost connectivity to HA (thankfully, it's super easy to restore from a VM snapshot!).

I'd appreciate any help.

 

My new Dreametech L10s Ultra has been great so far, and it does fine on my main floor (with the base station) and in my basement. But when I take it to my 2nd floor, it positions correctly and says "start cleaning", but then it spins once and says "please return robot to the base station". It has already successfully mapped the 2nd floor.

Has anyone experienced this? Why does it sense where it is and begin cleaning, but then immediately stop and demand the base station? It has done this numerous times. The battery is fully charged, and the mop pads are clean and dry.

Thanks for any assistance or ideas.

 

I'm trying to get my new Dreametech L10s Ultra (robot vacuum) discovered by Home Assistant, but they're on different subnets, and I found an explanation that there are sometimes problems discovering devices across subnets. This seems odd to me because the Xiaomi Miot Auto integration in Home Assistant saw my L10s and even knew its IP address - but perhaps that's TCP, and the problem is that the UDP discovery traffic can't cross subnets?

The article says there are two ways to possibly overcome the cross-subnet issue: put the devices on the same subnet (currently not an option for me), or "configure IP masquerading on the outgoing routing interface for the subnet where the MI device resides." With GPT's help, I tried to add IP masquerading (which I guess is just NAT), but it's not working. I'm pretty confident I did it wrong.

My networking knowledge is very basic. Can anyone help me configure my pfSense so that my L10s on one subnet can be discovered by Home Assistant (technically, by the Xiaomi Miot Auto integration in Home Assistant) on the other subnet?
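
For the record, here's the outbound NAT rule I think the article means, as I attempted it in pfSense (very possibly wrong - corrections welcome):

Firewall > NAT > Outbound, mode set to "Hybrid Outbound NAT", then add a rule:
  Interface:   the IoT (.151) network
  Source:      the Home Assistant (.150) subnet
  Translation: Interface Address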
