Google Nest SDM

2.3.1 should fix it…

I figured it out. Thanks, Martin.

Thanks Martin

Did you have a chance to check whether 2.3.1 fixed the issue?

Yes, it's perfect!


Hi Michel, these action cards were added in 2.3.1. Does it do the job of ignoring the cat…


Cool :star_struck:.

Going to test it this weekend and will post the results here :+1:

Keep up the great work, Martin!
I hope Protect support will still be added: GitHub - chrisjshull/homebridge-nest: Nest plugin for HomeBridge (it works fine there too, by the way).

And hopefully it can then go into the regular store before too long :grinning:


Hi Martin, can you add the following flow card: THEN increase the thermostat temperature by X degrees?

Hi Dude, will check it out. Thanks for the suggestion.


@Martin_Verbeek,
Hello Martin,
For the second time in a few months, my Homey stopped receiving data from the Nest Hello.
I've tried restarting the app, restarting Homey, and the "maintenance" option. Nothing seems to fix the problem.
Last time I was forced to remove your app and reinstall it to get things working again.
Is there an easier workaround known?

Thanks in advance.
Raoul

Seems to be a different issue. I have the same with my Hello; I thought I was the only one. But someone else reported the same problem, unrelated to this app. Looks like a Google platform issue.


Same problem here! My Nest Hello stopped reporting input 4 days ago. I tried removing the device and rebooting Homey. The device can still be added back, but no data comes in.

I am on Homey 5.02 and Nest SDM 2.2.2.
Everything still works (Nest Hello and thermostat), no problems.

My Nest Hello came back today…

Update 3.0.0 will be released shortly. (I will wait until the Homey 6.0.0-rc release gives less trouble.)

Thermostat: added an action card to set the temperature relative to the current setpoint (range -5 to +5).

New for camera-type devices:
Have some fun with ML/AI (Google Vision) for camera-type devices. Once the API is activated, by doing a logout/login in the app, you can play around with it.

The app sends images from events to the Vision ML service to detect objects, facial expressions, and logos in the provided picture.
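
For context, here is a minimal sketch of the kind of request this implies, using the public Vision `images:annotate` REST endpoint. The three feature types match the detections described above, but the function name and key-based auth are illustrative, not taken from the app:

```typescript
// Sketch of a Vision images:annotate request (REST v1), assuming a
// base64-encoded event snapshot and key-based auth. The function name
// and auth style are illustrative; the app's actual transport may differ.
async function annotateSnapshot(imageBase64: string, apiKey: string) {
  const body = {
    requests: [
      {
        image: { content: imageBase64 },
        features: [
          { type: 'OBJECT_LOCALIZATION' }, // objects + bounding polygons
          { type: 'FACE_DETECTION' },      // per-expression likelihoods
          { type: 'LOGO_DETECTION' },      // company logos + bounds
        ],
      },
    ],
  };
  const res = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
    },
  );
  if (!res.ok) throw new Error(`Vision API error: ${res.status}`);
  // responses[0] carries localizedObjectAnnotations, faceAnnotations,
  // and logoAnnotations for the three features requested above.
  return (await res.json()).responses[0];
}
```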

Trigger cards (all with a confidence level of >70%):
Detection of a facial expression (joy/sad/angry/wearing something/surprise); see the sketch after this list.
Detection of a logo (tags with the company name and its location in the picture).
Detection that something happened to an object in the picture (object disappeared/reappeared, or a new object appeared and disappeared again), with tags for the name and location in the picture.
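
A note on the face-expression card: Vision's FACE_DETECTION feature reports each expression as a likelihood enum rather than a percentage, so a >70% gate presumably maps onto the upper enum values. A sketch of such a mapping (the enum-to-threshold choice is my assumption, not confirmed by the app):

```typescript
// Vision face annotations report one likelihood enum per expression.
type Likelihood =
  | 'UNKNOWN' | 'VERY_UNLIKELY' | 'UNLIKELY'
  | 'POSSIBLE' | 'LIKELY' | 'VERY_LIKELY';

interface FaceAnnotation {
  joyLikelihood: Likelihood;
  sorrowLikelihood: Likelihood;
  angerLikelihood: Likelihood;
  surpriseLikelihood: Likelihood;
  headwearLikelihood: Likelihood; // the "wearing something" case
}

// Assumption: LIKELY/VERY_LIKELY is treated as clearing the >70% bar.
const CONFIDENT = new Set<Likelihood>(['LIKELY', 'VERY_LIKELY']);

function detectedExpressions(face: FaceAnnotation): string[] {
  const checks: Array<[string, Likelihood]> = [
    ['joy', face.joyLikelihood],
    ['sad', face.sorrowLikelihood],
    ['angry', face.angerLikelihood],
    ['surprise', face.surpriseLikelihood],
    ['wearing something', face.headwearLikelihood],
  ];
  return checks.filter(([, l]) => CONFIDENT.has(l)).map(([name]) => name);
}
```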

Action card:
Control Image Analytics (On / Off / Reset). Use On/Off when there is insufficient light to get confident results. Reset can be used to capture the initial situation for a device, i.e. which objects are normally present in the picture. The reset is performed when the next SOUND event occurs for the device.

Where:
The picture is divided into 9 areas; the app computes the center of the detected object and returns a Top/Center/Bottom, Left/Center/Right combination in the Object Location tag.
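
A minimal sketch of that mapping, assuming the normalized (0..1) bounding-polygon vertices that object localization returns; the centroid is bucketed into thirds on each axis (function and type names are illustrative):

```typescript
interface Vertex { x: number; y: number; }

// Map the centroid of a normalized bounding polygon (0..1 coordinates)
// to one of the 9 picture areas.
function objectLocation(vertices: Vertex[]): string {
  const cx = vertices.reduce((s, v) => s + v.x, 0) / vertices.length;
  const cy = vertices.reduce((s, v) => s + v.y, 0) / vertices.length;
  const vertical = cy < 1 / 3 ? 'Top' : cy < 2 / 3 ? 'Center' : 'Bottom';
  const horizontal = cx < 1 / 3 ? 'Left' : cx < 2 / 3 ? 'Center' : 'Right';
  return `${vertical}/${horizontal}`;
}
```

For instance, the Bicycle row in Google's example table further down has its center near (0.48, 0.82), which this would report as Bottom/Center.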


Same here!

Could you tell us more about the ML/AI (Google Vision)?

See below for a description of localizedObjects (one of the three ways the app uses Vision). I would love to draw the annotations you see in the picture, but the image functions needed for that are too large to keep the app within a reasonable size. I am still looking for a simple image-manipulation library that fits the bill. You do get the annotation names and locations back in the trigger that responds to object moves, etc.

When you start using the function, or when you execute a Reset action card, the app takes an inventory of objects per device. This inventory is a sort of “expected” state. So when the bicycle disappears you can react to it with a trigger, and likewise when it reappears, or when a new object enters the scene and disappears again. The bounds are used in the app to tell you where the object was found in the image. If you have more than one bicycle, you would see bicycle#2 as the name shown.
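
A rough sketch of how that expected-state bookkeeping could work, purely as an illustration of the behavior described (duplicates get a #2 suffix, and each new detection set is diffed against the stored inventory); none of these names come from the app:

```typescript
// Name detected objects, giving duplicates #2, #3, ... suffixes
// (numbering follows detection order).
function nameObjects(labels: string[]): Set<string> {
  const counts = new Map<string, number>();
  const named = new Set<string>();
  for (const label of labels) {
    const n = (counts.get(label) ?? 0) + 1;
    counts.set(label, n);
    named.add(n === 1 ? label : `${label}#${n}`);
  }
  return named;
}

// Diff current detections against the "expected" inventory captured at
// startup or on Reset: missing names disappeared, extra names appeared.
function diffInventory(expected: Set<string>, detectedLabels: string[]) {
  const current = nameObjects(detectedLabels);
  return {
    disappeared: [...expected].filter((o) => !current.has(o)),
    appeared: [...current].filter((o) => !expected.has(o)),
  };
}
```

For example, with an expected inventory of {bicycle, bicycle#2, door}, a frame detecting only one bicycle and a person yields disappeared = [bicycle#2, door] and appeared = [person], each of which could fire the corresponding trigger card.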

Google website text

The Vision API can detect and extract multiple objects in an image with Object Localization.

Object localization identifies multiple objects in an image and provides a LocalizedObjectAnnotation for each object in the image. Each LocalizedObjectAnnotation identifies information about the object, the position of the object, and rectangular bounds for the region of the image that contains the object.

Object localization identifies both significant and less-prominent objects in an image.

Object information is returned in English only. The Cloud Translation API can translate English labels into a number of other languages.

Image credit: Bogdan Dada on Unsplash (annotations added).

For example, the API might return the following information and bounding location data for the objects in the image above:

| Name | mid | Score | Bounds |
| --- | --- | --- | --- |
| Bicycle wheel | /m/01bqk0 | 0.89648587 | (0.32076266, 0.78941387), (0.43812272, 0.78941387), (0.43812272, 0.97331065), (0.32076266, 0.97331065) |
| Bicycle | /m/0199g | 0.886761 | (0.312, 0.6616471), (0.638353, 0.6616471), (0.638353, 0.9705882), (0.312, 0.9705882) |
| Bicycle wheel | /m/01bqk0 | 0.6345275 | (0.5125398, 0.760708), (0.6256646, 0.760708), (0.6256646, 0.94601655), (0.5125398, 0.94601655) |
| Picture frame | /m/06z37_ | 0.6207608 | (0.79177403, 0.16160682), (0.97047985, 0.16160682), (0.97047985, 0.31348917), (0.79177403, 0.31348917) |
| Tire | /m/0h9mv | 0.55886006 | (0.32076266, 0.78941387), (0.43812272, 0.78941387), (0.43812272, 0.97331065), (0.32076266, 0.97331065) |
| Door | /m/02dgv | 0.5160098 | (0.77569866, 0.37104446), (0.9412425, 0.37104446), (0.9412425, 0.81507325), (0.77569866, 0.81507325) |

Mmmm, interesting to play with… When my Hello doorbell sees my car on the driveway, it can open my front door (“Nuki”).