The Doorbell device type is implemented using two traits:
PushAvStreamTransportTrait,
which handles audio and video stream transport using push-based protocols, and
WebRtcLiveViewTrait,
which provides the ability to control livestreams and talkback.
Always check for attribute and command support for a device prior to using any features or attempting to update attributes. See Control devices on iOS for more information.
| Home APIs Device Type | Traits | Swift Sample App | Use Case |
|---|---|---|---|
| Doorbell<br>A device actuated by a button outside a door that makes an audible and/or visual signal, used to request the attention of a person who is somewhere on the other side of the door. Doorbells may feature accessible livestreams, two-way talkback, or detection events. | Required traits:<br>google PushAvStreamTransportTrait<br>google WebRtcLiveViewTrait | Doorbell | |
Get basic information about a device
The BasicInformation
trait includes information like vendor name, vendor ID, product ID,
product name (includes model information), software version,
and the serial number for a device:
let vendorName = basicInfoTrait.attributes.vendorName!
let vendorID = basicInfoTrait.attributes.vendorID!
let productID = basicInfoTrait.attributes.productID!
let productName = basicInfoTrait.attributes.productName!
let softwareVersion = basicInfoTrait.attributes.softwareVersion!
let serialNumber = basicInfoTrait.attributes.serialNumber!
Get the most recent time of device cloud contact
To find the most recent time that the device had contact with the cloud, use
the lastContactTimestamp attribute of the
ExtendedGeneralDiagnostics
trait:
if let lastContactTimestamp = extendedGeneralDiagnosticsTrait.attributes.lastContactTimestamp {
  self.lastContactTime = Date(timeIntervalSince1970: Double(lastContactTimestamp))
}
Check connectivity for a device
Connectivity for a device is checked at the device type level because some devices support multiple device types. The state returned is a combination of the connectivity states of all traits on that device.
let lightConnectivity = dimmableLightDeviceType.metadata.sourceConnectivity.connectivityState
A state of partiallyOnline may be observed in the case of mixed device types
when there is no internet connectivity. Matter standard
traits may still be online due to local routing, but cloud-based traits will be
offline.
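A UI will typically map each connectivity state to a user-facing label. The following is a minimal, self-contained sketch; the ConnectivityState enum here is a hypothetical, simplified stand-in for the SDK's actual type on metadata.sourceConnectivity:

```swift
// Simplified stand-in for the SDK's connectivity state (hypothetical;
// the real value lives at metadata.sourceConnectivity.connectivityState).
enum ConnectivityState {
    case online
    case offline
    case partiallyOnline
    case unknown
}

// Maps a connectivity state to a user-facing status label.
func statusLabel(for state: ConnectivityState) -> String {
    switch state {
    case .online:
        return "Online"
    case .offline:
        return "Offline"
    case .partiallyOnline:
        // Matter traits may be reachable locally while cloud traits are offline.
        return "Partially online"
    case .unknown:
        return "Unknown"
    }
}
```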
Start a livestream
To start a livestream, send the Session Description Protocol (SDP) string
to the
WebRtcLiveViewTrait
trait's
startLiveView(offerSdp:)
method, which returns three values:
- The answer SDP for the session.
- The session duration in seconds.
- The session ID, which may be used to extend or terminate the session.
public func sendOffer(offerSdp: String) async throws
-> (answerSdp: String, mediaSessionId: String, liveViewDuration: TimeInterval)
{
do {
// Sending StartLiveView command
let response = try await liveViewTrait.startLiveView(
offerSdp: offerSdp
)
// Received StartLiveView response
return (
answerSdp: response.answerSdp,
mediaSessionId: response.mediaSessionId,
liveViewDuration: TimeInterval(response.liveSessionDurationSeconds)
)
} catch {
// Failed to send StartLiveView command
throw error
}
}
Extend a livestream
Livestreams have a preset duration after which they expire. To lengthen the
duration of an active stream, issue an extension request using the
extendLiveView(mediaSessionId:optionalArgsProvider:)
method:
public func extendLiveView(mediaSessionId: String) async throws {
do {
// Extending live view
// Extend the live view; the response contains the new session duration.
_ = try await liveViewTrait.extendLiveView(mediaSessionId: mediaSessionId)
} catch {
// Failed to extend live view
throw error
}
}
Start and stop talkback
To start talkback, call the
WebRtcLiveViewTrait
trait's startTalkback(mediaSessionId:optionalArgsProvider:) method.
To stop, use stopTalkback(mediaSessionId:).
public func toggleTwoWayTalk(isOn: Bool, mediaSessionId: String) async throws {
do {
if isOn {
try await liveViewTrait.startTalkback(mediaSessionId: mediaSessionId)
} else {
try await liveViewTrait.stopTalkback(mediaSessionId: mediaSessionId)
}
} catch {
throw HomeError.commandFailed("Failed to toggle twoWayTalk: \(error)")
}
}
Enable and disable recording capability
To enable the camera's recording capability, pass
TransportStatusEnum.Active
to the
PushAvStreamTransportTrait
trait's
setTransportStatus(transportStatus:optionalArgsProvider:)
method. To disable the recording capability, pass it
TransportStatusEnum.Inactive.
In the following example, we wrap these calls in a single call that uses a
Boolean to toggle the recording capability:
public func toggleIsRecording(isOn: Bool) {
self.uiState = .loading
guard let pushAvStreamTransportTrait else {
// PushAvStreamTransportTrait not found.
return
}
Task {
do {
try await pushAvStreamTransportTrait.setTransportStatus(
transportStatus: isOn ? .active : .inactive)
if isOn {
do {
self.player = try self.createWebRtcPlayer()
} catch {
// Failed to initialize WebRtcPlayer
self.uiState = .disconnected
return
}
await self.player?.initialize()
self.uiState = .live
} else {
self.player = nil
self.uiState = .off
}
} catch {
// Failed to toggle recording
}
}
}
Enabling or disabling the camera's recording capability is the same as turning the camera video on or off. When a camera's video is on, it is recording (for purposes of events and related clips).
When the recording capability is disabled (the camera video is off):
- The camera can still show as online per the connectivityState of the device type.
- The livestream cannot be accessed, nor does the camera detect any cloud events.
Check to see if recording capability is enabled
To determine if a camera's recording capability is enabled, check to see if any connections are active. The following example defines a function to do this:
public func isDeviceRecording() -> Bool {
guard let pushAvStreamTransportTrait else {
// PushAvStreamTransportTrait not found.
return false
}
guard
let hasActiveConnection =
pushAvStreamTransportTrait
.attributes
.currentConnections?
.contains(where: { $0.transportStatus == .active })
else {
return false
}
return hasActiveConnection
}
Battery settings
Various battery settings can be controlled through the Home APIs.
Set the battery usage preference
Setting the energy balance lets you configure the tradeoff between battery life and performance for a device. You can create different battery profiles, such as "Extended," "Balanced," and "Performance," and switch between them.
This feature is implemented by updating the
currentEnergyBalance attribute
of the EnergyPreference
trait. The attribute accepts an integer
index that corresponds to a specific profile defined in the
device's energyBalances list (for example, 0 for EXTENDED,
1 for BALANCED, and 2 for PERFORMANCE).
A null value for currentEnergyBalance indicates the device is using a
custom profile. This is a read-only state.
The following shows an example energyBalances structure that the currentEnergyBalance
attribute indexes into, followed by a code snippet that updates the attribute.
// Example energyBalances list
{
  "energy_balances": [
    { "step": 0, "label": "EXTENDED" },
    { "step": 50, "label": "BALANCED" },
    { "step": 100, "label": "PERFORMANCE" }
  ]
}
private func setBatteryUsage(to option: UInt8) async throws {
  _ = try await energyPreferenceTrait.update {
    $0.setCurrentEnergyBalance(option)
  }
}
Turn on automatic battery saver
To configure this feature, update the
currentLowPowerModeSensitivity attribute
of the EnergyPreference
trait. This attribute uses an index
to select a sensitivity level, where 0 typically represents Disabled
and 1 represents Enabled or Automatic.
private func setAutoBatterySaver(to value: Bool) async throws {
  _ = try await energyPreferenceTrait.update {
    $0.setCurrentLowPowerModeSensitivity(value ? 1 : 0)
  }
}
Get the battery charging state
To get the device's current charging state (charging, fully charged, or not
charging), use the
batChargeState
attribute of the
PowerSource
trait.
self.chargingState = powerSourceTrait.attributes.batChargeState

var description: String
switch self.chargingState {
case .isCharging: description = "Charging"
case .isAtFullCharge: description = "Full"
case .isNotCharging: description = "Not Charging"
default: description = "Unknown"
}
Get the battery level
To get the current battery level, use the
batChargeLevel
attribute of the
PowerSource
trait. The level is either OK, Warning (low), or Critical.
self.batteryLevel = powerSourceTrait.attributes.batChargeLevel

var description: String
switch self.batteryLevel {
case .ok: description = "OK"
case .warning: description = "Warning"
case .critical: description = "Critical"
default: description = "Unknown"
}
Get the power source
To determine the power source the device is using,
use the batPresent
and
wiredPresent
attributes of the
PowerSource
trait.
if powerSourceTrait.attributes.wiredPresent ?? false {
  self.powerSourceType = .wired
} else if powerSourceTrait.attributes.batPresent ?? false {
  self.powerSourceType = .battery
} else {
  self.powerSourceType = nil
}
Audio settings
Various audio settings can be controlled through the Home APIs.
Turn microphone on or off
To turn the device's microphone on or off, update the
microphoneMuted
attribute of the CameraAvStreamManagementTrait trait using the built-in
setMicrophoneMuted function:
// Turn the device's microphone on or off
func setMicrophone(on: Bool) async {
do {
_ = try await self.cameraAvStreamManagementTrait?.update {
$0.setMicrophoneMuted(!on)
}
} catch {
// Error
}
}
Turn audio recording on or off
To turn audio recording on or off for the device, update the
recordingMicrophoneMuted
attribute of the CameraAvStreamManagementTrait trait using the built-in
setRecordingMicrophoneMuted function:
// Turn audio recording on or off for the device
func setAudioRecording(on: Bool) async {
do {
_ = try await self.cameraAvStreamManagementTrait?.update {
$0.setRecordingMicrophoneMuted(!on)
}
} catch {
// Error
}
}
Adjust the speaker volume
To adjust the speaker volume for the device, update the
speakerVolumeLevel
attribute of the CameraAvStreamManagementTrait trait using the built-in
setSpeakerVolumeLevel function:
// Adjust the camera speaker volume
func setSpeakerVolume(to value: UInt8) async {
do {
_ = try await cameraAvStreamManagementTrait.update {
$0.setSpeakerVolumeLevel(value)
}
} catch {
// Error
}
}
Activity zone settings
The ZoneManagement trait provides an interface for managing custom
regions of interest (activity zones) on camera and doorbell devices.
These zones are used to filter event detection (such as person
or vehicle motion) to specific areas within the device's field of view.
Activity zones are configured by the user within a partner application, allowing them to draw zones over specific areas in the camera's field of view. These user-defined zones are then translated into the structures used by this trait. For more information on how activity zones work, see Set up and use Activity Zones.
Activity zones are typically defined using 2D Cartesian coordinates.
The trait provides the
TwoDCartesianVertexStruct for vertices and the
TwoDCartesianZoneStruct
for the zone definition (name, vertices, color, and usage).
Check activity zones
To display activity zones, check the
zones
attribute of the ZoneManagement trait.
let zoneManagementTrait: Google.ZoneManagementTrait

self.zones = zoneManagementTrait.attributes.zones ?? []
Add an activity zone
To create a new zone, use the
createTwoDCartesianZone command. This command takes a TwoDCartesianZoneStruct,
which defines the zone's name, vertices, color, and usage.
The following example shows how to create a zone named "Front Porch" with eight vertices, colored salmon (#F439A0), and used for motion detection.
import GoogleHomeSDK
import GoogleHomeTypes

func createFrontPorchZone(trait: Google.ZoneManagementTrait) async {
  // 1. Define the vertices for the zone (2D Cartesian coordinates).
  // Values are UInt16, typically scaled to the device's twoDCartesianMax.
  let vertices = [
    Google.ZoneManagementTrait.TwoDCartesianVertexStruct(x: 260, y: 422),
    Google.ZoneManagementTrait.TwoDCartesianVertexStruct(x: 1049, y: 0),
    Google.ZoneManagementTrait.TwoDCartesianVertexStruct(x: 2048, y: 0),
    Google.ZoneManagementTrait.TwoDCartesianVertexStruct(x: 2048, y: 950),
    Google.ZoneManagementTrait.TwoDCartesianVertexStruct(x: 1630, y: 1349),
    Google.ZoneManagementTrait.TwoDCartesianVertexStruct(x: 880, y: 2048),
    Google.ZoneManagementTrait.TwoDCartesianVertexStruct(x: 0, y: 2048),
    Google.ZoneManagementTrait.TwoDCartesianVertexStruct(x: 638, y: 1090)
  ]
  // 2. Define the zone structure using the SDK struct.
  let newZone = Google.ZoneManagementTrait.TwoDCartesianZoneStruct(
    name: "Front Porch",
    use: [.motion], // ZoneUseEnum.motion
    vertices: vertices,
    // Color is a hex string (for example, salmon).
    color: "#F439A0"
  )
  do {
    // 3. Execute the command to add the zone to the device.
    // This returns the created zone's ID (UInt16).
    let newZoneID = try await trait.createTwoDCartesianZone(zone: newZone)
  } catch {
    // Error
  }
}
Update an activity zone
To update an existing zone, use the
updateTwoDCartesianZone command. This command requires the zoneId and the updated
TwoDCartesianZoneStruct.
let zoneManagementTrait: Google.ZoneManagementTrait
let zoneID: UInt16
let zone: Google.ZoneManagementTrait.TwoDCartesianZoneStruct

do {
  _ = try await zoneManagementTrait.updateTwoDCartesianZone(
    zoneID: zoneID,
    zone: zone)
} catch {
  // Error
}
Delete an activity zone
To remove a zone, use the
removeZone command with the specific zoneId.
let zoneManagementTrait: Google.ZoneManagementTrait
let zoneID: UInt16

do {
  _ = try await zoneManagementTrait.removeZone(zoneID: zoneID)
} catch {
  // Error
}
Sound event triggers
The AvStreamAnalysis trait provides an interface for managing event detection
triggers on camera and doorbell devices. While vision-based triggers
(such as people or vehicles) can be zone-specific, sound-related triggers
are typically device-level configurations.
The following trigger types are available for sound detection with the
EventTriggerTypeEnum:
| Mode | Enum value | Description |
|---|---|---|
| Sound | Sound | General sound detection. |
| Person talking | PersonTalking | Detects speech. |
| Dog bark | DogBark | Detects canine vocalizations. |
| Glass break | GlassBreak | Detects the sound of breaking glass. |
| Smoke alarm | SmokeAlarm | Detects smoke alarms, often recognized by the T3 audible pattern (three short beeps followed by a pause). |
| CO alarm | CoAlarm | Detects carbon monoxide (CO) alarms, typically recognized by the T4 audible pattern (four short beeps followed by a pause). |
Check sound detection status
To display the current state of sound detection to the user, you must check what the device supports and what is enabled by the device hardware. The two attributes to check are:
- supportedEventTriggers - The trigger types the device supports.
- enabledEventTriggers - The trigger types currently enabled.
In iOS development, you would typically access the AvStreamAnalysis trait
from the device to read these attributes.
// Example struct to store event triggers
public struct EventTrigger: Equatable {
  public var id: Google.AvStreamAnalysisTrait.EventTriggerTypeEnum
  public var enabled: Bool
}

let avStreamAnalysisTrait: Google.AvStreamAnalysisTrait

let possibleEventTriggers = avStreamAnalysisTrait.attributes.supportedEventTriggers ?? []
let enabledEventTriggers = avStreamAnalysisTrait.attributes.enabledEventTriggers ?? []
var eventTriggers: [EventTrigger] = []
for trigger in possibleEventTriggers {
  eventTriggers.append(
    EventTrigger(
      id: trigger,
      enabled: enabledEventTriggers.contains(trigger)
    )
  )
}
Update the set of enabled triggers
To update the set of enabled triggers, use the
SetOrUpdateEventDetectionTriggers
command, which takes a list of EventTriggerEnablement structures.
// Example struct to store event triggers
public struct EventTrigger: Equatable {
  public var id: Google.AvStreamAnalysisTrait.EventTriggerTypeEnum
  public var enabled: Bool
}

let avStreamAnalysisTrait: Google.AvStreamAnalysisTrait
let eventTriggers: [EventTrigger]

let enabledEventTriggers = eventTriggers.map {
  Google.AvStreamAnalysisTrait.EventTriggerEnablement(
    eventTriggerType: $0.id,
    enablementStatus: $0.enabled ? .enabled : .disabled
  )
}
try await avStreamAnalysisTrait.setOrUpdateEventDetectionTriggers(
  eventTriggerEnablements: enabledEventTriggers
)
Recording modes
The RecordingMode trait provides an interface for managing the video
and image recording behavior on camera and doorbell devices. It allows
users to choose between continuous recording, event-based recording,
or disabling recording entirely (Live View only).
The RecordingModeEnum
defines the available recording strategies:
| Mode | Enum value | Description |
|---|---|---|
| Disabled | Disabled | Recording is completely disabled. Used mainly by legacy devices. |
| CVR (Continuous Video Recording) | Cvr | Video is recorded 24x7. Requires a subscription (for example, Google Home Premium). |
| EBR (Event Based Recording) | Ebr | Recording is triggered by events (person, motion). Video length depends on the event duration and subscription. |
| ETR (Event Triggered Recording) | Etr | Short preview recording (for example, 10 seconds) triggered by events. |
| Live View | LiveView | Recording is disabled, but users can still access the livestream. |
| Still Images | Images | Snapshots are recorded instead of video when events occur. |
Check recording modes
To display the current recording configuration, check the attributes
of the RecordingMode trait:
- supportedRecordingModes - All potential modes
- availableRecordingModes - Selectable modes
- selectedRecordingMode - The active mode
// Example struct to store recording modes.
public struct RecordingMode: Hashable {
  public let id: UInt8
  public let mode: Google.RecordingModeTrait.RecordingModeEnum
}

let recordingModeTrait: Google.RecordingModeTrait

if let availableRecordingModes = recordingModeTrait.attributes.availableRecordingModes,
  let supportedRecordingModes = recordingModeTrait.attributes.supportedRecordingModes,
  let selectedRecordingMode = recordingModeTrait.attributes.selectedRecordingMode
{
  var recordingModes: [RecordingMode] = []
  for recordingModeId in availableRecordingModes {
    guard Int(recordingModeId) < supportedRecordingModes.count,
      Int(recordingModeId) >= 0
    else {
      // Out of bounds error
      continue
    }
    recordingModes.append(
      RecordingMode(
        id: recordingModeId,
        mode: supportedRecordingModes[Int(recordingModeId)].recordingMode
      )
    )
  }
}
Change the recording mode
Before updating, ensure that the chosen index from the supportedRecordingModes
attribute is present in the availableRecordingModes attribute.
To update the selected mode, use the
setSelectedRecordingMode function, passing the index of the chosen mode:
let recordingModeTrait: Google.RecordingModeTrait
let recordingModeID: UInt8

_ = try await recordingModeTrait.update {
  $0.setSelectedRecordingMode(recordingModeID)
}
Other settings
Various other settings can be controlled through the Home APIs.
Turn night vision on or off
To turn night vision on or off for the camera, use TriStateAutoEnum
to update the
nightVision
attribute of the CameraAvStreamManagementTrait trait using the built-in
setNightVision function:
// Turn night vision on or off
func setNightVision(
to value: Google.CameraAvStreamManagementTrait.TriStateAutoEnum
) async {
do {
_ = try await cameraAvStreamManagementTrait.update {
$0.setNightVision(value)
}
} catch {
// Error
}
}
Change the brightness of the status LED
To change the brightness of the status LED, use
ThreeLevelAutoEnum
to update the
statusLightBrightness
attribute of the CameraAvStreamManagementTrait trait using the built-in
setStatusLightBrightness function:
// Set the LED brightness
func setStatusLightBrightness(
to value: Google.CameraAvStreamManagementTrait.ThreeLevelAutoEnum
) async {
do {
_ = try await cameraAvStreamManagementTrait.update {
$0.setStatusLightBrightness(value)
}
} catch {
// Error
}
}
Change the camera viewport
The camera viewport is the same as the Zoom and Crop feature described in the Zoom and enhance Nest camera video support article.
The viewport is defined in a ViewportStruct that contains four values, which
are used as the coordinates of the viewport. The coordinates are defined as:
(x1,y1) ------ (x2,y1)
   |              |
(x1,y2) ------ (x2,y2)
Determining the values for the ViewportStruct depends on an app's UI and
camera implementation. At a very basic level, to set the viewport of the camera
video, update the
viewport
attribute of the CameraAvStreamManagementTrait trait with a ViewportStruct, using
the built-in setViewport function.
func setCrop(x1: UInt16, y1: UInt16, x2: UInt16, y2: UInt16) {
let viewport = Google.CameraAvStreamManagementTrait.ViewportStruct(
x1: x1,
y1: y1,
x2: x2,
y2: y2
)
Task {
do {
try await cameraAvStreamManagementTrait.update {
$0.setViewport(viewport)
}
} catch {
// Error
}
}
}
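As an illustration of how an app might derive these coordinates, the following self-contained sketch computes a viewport for a zoom centered on a point, clamped to the sensor bounds. The Viewport struct and centeredViewport helper are hypothetical stand-ins, not part of the Home APIs SDK:

```swift
// Hypothetical stand-in for the ViewportStruct coordinates described above.
struct Viewport { let x1, y1, x2, y2: UInt16 }

// Computes a viewport rectangle for a zoom centered at (centerX, centerY),
// clamped so the rectangle stays inside the sensor. Assumes zoom >= 1.
func centeredViewport(
    sensorWidth: UInt16, sensorHeight: UInt16,
    centerX: UInt16, centerY: UInt16,
    zoom: Double
) -> Viewport {
    // The zoomed viewport covers 1/zoom of each sensor dimension.
    let w = Double(sensorWidth) / zoom
    let h = Double(sensorHeight) / zoom
    // Clamp the origin so the viewport does not extend past the edges.
    let x = min(max(0, Double(centerX) - w / 2), Double(sensorWidth) - w)
    let y = min(max(0, Double(centerY) - h / 2), Double(sensorHeight) - h)
    return Viewport(
        x1: UInt16(x), y1: UInt16(y),
        x2: UInt16(x + w), y2: UInt16(y + h)
    )
}
```

An app would then build the SDK's ViewportStruct from these four values and pass it to setViewport as shown above.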
Generate a TransportOptionsStruct
Some settings require modifications to properties in a TransportOptionsStruct,
which is then passed into the transport options of a streaming connection. For
Swift, this struct needs to be generated prior to updating any properties.
Use this helper function to generate the struct for use with the following setting changes:
func getTransportOptions(
transportOptions: Google.PushAvStreamTransportTrait.TransportOptionsStruct,
wakeUpSensitivity: UInt8?,
maxEventLength: UInt32?
) async throws
-> Google.PushAvStreamTransportTrait.TransportOptionsStruct
{
var newMotionTimeControl:
Google.PushAvStreamTransportTrait.TransportMotionTriggerTimeControlStruct? = nil
if let maxEventLength {
  guard let motionTimeControl = transportOptions.triggerOptions.motionTimeControl else {
    // Cannot update the max event length without motion time control.
    throw HomeError.failedPrecondition(
      "Cannot update max event length without motion time control")
  }
newMotionTimeControl =
Google.PushAvStreamTransportTrait.TransportMotionTriggerTimeControlStruct(
initialDuration: motionTimeControl.initialDuration,
augmentationDuration: motionTimeControl.augmentationDuration,
maxDuration: maxEventLength,
blindDuration: motionTimeControl.blindDuration
)
}
return Google.PushAvStreamTransportTrait.TransportOptionsStruct(
streamUsage: .recording,
videoStreamID: nil,
audioStreamID: nil,
tlsEndpointID: transportOptions.tlsEndpointID,
url: transportOptions.url,
triggerOptions: Google.PushAvStreamTransportTrait.TransportTriggerOptionsStruct(
triggerType: .motion,
motionZones: nil,
motionSensitivity: wakeUpSensitivity,
motionTimeControl: newMotionTimeControl,
maxPreRollLen: nil
),
ingestMethod: .cmafIngest,
containerOptions: Google.PushAvStreamTransportTrait.ContainerOptionsStruct(
containerType: .cmaf,
cmafContainerOptions: nil
),
expiryTime: nil
)
}
private func getRecordingConnection() async throws
-> Google.PushAvStreamTransportTrait.TransportConfigurationStruct?
{
guard let pushAvStreamTransportTrait else {
// Error - PushAvStreamTransport trait not available
return nil
}
let connections = try await pushAvStreamTransportTrait.findTransport().transportConfigurations
for connection in connections {
guard let transportOptions = connection.transportOptions,
transportOptions.streamUsage == .recording
else {
continue
}
return connection
}
return nil
}
Adjust the device wake-up sensitivity
The device's wake-up sensitivity is used to conserve battery by decreasing the range at which the device can sense activity and increasing the time to wake up after detecting that activity.
In the Home APIs, this can be set using the motionSensitivity property of the
triggerOptions in the device's transportOptions. These options are defined
within the PushAvStreamTransportTrait trait for each device.
The wake-up sensitivity can only be set to the following values:
- 1 = Low
- 5 = Medium
- 10 = High
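Since only these three values are accepted, it can help to model them in app code before calling the trait. The enum and helper below are hypothetical conveniences, not SDK types:

```swift
// Hypothetical app-side enum for the accepted motionSensitivity values.
enum WakeUpSensitivity: UInt8, CaseIterable {
    case low = 1
    case medium = 5
    case high = 10
}

// Returns true if a raw value is one of the accepted sensitivity levels.
func isValidWakeUpSensitivity(_ value: UInt8) -> Bool {
    WakeUpSensitivity(rawValue: value) != nil
}
```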
The process for updating is to find the transport configuration for active
recording streams using the
findTransport
command, then modify the configuration with the new sensitivity value using the
modifyPushTransport
command.
The modifyPushTransport command requires the full TransportOptionsStruct to
be passed, so you must copy existing values from the current configuration
first. See Generate a TransportOptionsStruct for a
helper function to do this.
func setWakeUpSensitivity(to value: UInt8) async {
do {
let connection = try await getRecordingConnection()
guard let connection,
let transportOptions = connection.transportOptions
else {
// Error - Transport options not available
return
}
guard transportOptions.triggerOptions.motionSensitivity != nil else {
// Error - Motion sensitivity not available to be updated for this device
return
}
try await pushAvStreamTransportTrait.modifyPushTransport(
connectionID: connection.connectionID,
transportOptions: self.getTransportOptions(
transportOptions: transportOptions,
wakeUpSensitivity: value,
maxEventLength: nil
)
)
} catch {
// Error
}
}
Adjust the maximum event length
The maximum event length is the length of time the camera will record a clip for an event. Through the Home APIs this can be configured, per device, to the same lengths as through the Google Home app (GHA), in intervals of seconds:
- 10 seconds
- 15 seconds
- 30 seconds
- 60 seconds (1 minute)
- 120 seconds (2 minutes)
- 180 seconds (3 minutes)
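Because only these durations are accepted, an app slider or picker can snap arbitrary input to the nearest supported value before updating the trait. The helper below is a hypothetical sketch, not part of the SDK:

```swift
// Supported maximum event lengths, in seconds.
let supportedEventLengths: [UInt32] = [10, 15, 30, 60, 120, 180]

// Snaps an arbitrary duration to the nearest supported event length.
func nearestSupportedEventLength(to seconds: UInt32) -> UInt32 {
    supportedEventLengths.min { a, b in
        // Compare absolute distances without unsigned underflow.
        let da = a > seconds ? a - seconds : seconds - a
        let db = b > seconds ? b - seconds : seconds - b
        return da < db
    }!
}
```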
In the Home APIs, this can be set using the motionTimeControl property of the
triggerOptions in the device's transportOptions. These options are defined
within the PushAvStreamTransportTrait trait for each device.
The process for updating is to find the transport configuration for active
recording streams using the
findTransport
command, then modify the configuration with the new event length value using the
modifyPushTransport
command.
The modifyPushTransport command requires the full TransportOptionsStruct to
be passed, so you must copy existing values from the current configuration
first. See Generate a TransportOptionsStruct for a
helper function to do this.
func setMaxEventLength(to value: UInt32) async {
do {
let connection = try await getRecordingConnection()
guard let connection,
let transportOptions = connection.transportOptions
else {
// Error - Transport options not available
return
}
guard transportOptions.triggerOptions.motionTimeControl != nil else {
// Error - Motion time control not available to be updated for this device
return
}
try await pushAvStreamTransportTrait.modifyPushTransport(
connectionID: connection.connectionID,
transportOptions: self.getTransportOptions(
transportOptions: transportOptions,
wakeUpSensitivity: nil,
maxEventLength: value
)
)
} catch {
// Error
}
}
Enable or disable analytics
Each device may individually opt in to sending detailed analytics data to the Google Home cloud (see Cloud Monitoring for Home APIs).
To enable analytics for a device, set the
analyticsEnabled
property of the
ExtendedGeneralDiagnosticsTrait
to true. When you set analyticsEnabled,
another property, logUploadEnabled, is automatically set to true, which
allows the analytics log files to be uploaded to the Google Home cloud.
// Enable analytics
_ = try await extendedGeneralDiagnosticsTrait.update {
$0.setAnalyticsEnabled(true)
}
// Disable analytics
_ = try await extendedGeneralDiagnosticsTrait.update {
$0.setAnalyticsEnabled(false)
}
Chime settings
Various doorbell chime settings can be controlled through the Home APIs.
Change the chime sound
To change the doorbell chime sound, first get the list of chime sounds that are
installed on the device, using the
installedChimeSounds
attribute of the ChimeTrait trait:
doorbellChimeTrait.attributes.installedChimeSounds?.map { chimeSound in
  return (chimeSound.chimeID, chimeSound.name)
}
Then, update the
selectedChime
attribute of the ChimeTrait trait using the built-in setSelectedChime
function:
func setDoorbellChime(chimeID: UInt8) async {
do {
_ = try await doorbellChimeTrait.update {
$0.setSelectedChime(chimeID)
}
} catch {
// Error
}
}
Use an external chime
The doorbell can be configured to use an external chime, such as a mechanical bell installed inside the home. This should be configured during doorbell installation to avoid potential damage to the external chime.
To indicate what type of external chime is installed, use
ExternalChimeType
to update the
externalChime
attribute of the ChimeTrait trait using the built-in
setExternalChime function:
// Indicate the external chime is mechanical
func setExternalChime(to value: Google.ChimeTrait.ExternalChimeType) async {
do {
_ = try await doorbellChimeTrait.update {
$0.setExternalChime(value)
}
} catch {
// Error
}
}
Change the external chime duration
The duration, in seconds, that an external chime rings can be configured through the Home APIs. If the external chime supports a chime duration, a user may want to configure this.
The value set here is dependent on the specifications of the external chime itself, and what its recommended chime duration is.
To change the external chime duration, update the
externalChimeDurationSeconds
attribute of the ChimeTrait trait using the built-in
setExternalChimeDurationSeconds function:
// Change the external chime duration
func setExternalChimeDuration(to value: UInt16) async {
do {
_ = try await doorbellChimeTrait.update {
$0.setExternalChimeDurationSeconds(value)
}
} catch {
// Error
}
}
Enable a chime theme
Some doorbells may have chimes that are only available to users for a limited time. For example, chimes specific to holidays. These are called chime themes.
To see which chime themes are available for a user, create a timebox filter
and use it to filter the results of the
getAvailableThemes
command from the
ChimeThemes
trait. This returns a list of available themes, including theme names.
The following example shows how to filter the list.
A theme is considered active if the current time is within its
start and end times (the startTimeSeconds and endTimeSeconds values,
respectively). If a start time isn't set, it's considered active from the
beginning of time. If an end time isn't set, it continues to be active
indefinitely. If both are missing, the theme is always active.
let chimeThemes = try await chimeThemeTrait.getAvailableThemes().themes
if !chimeThemes.isEmpty {
  var chimeThemeSettings: [String] = []
  for chimeTheme in chimeThemes {
    let currentDateTime = UInt64(Date().timeIntervalSince1970)
    // Only show chime themes that are active.
    if chimeTheme.startTimeSeconds ?? 0 <= currentDateTime
      && chimeTheme.endTimeSeconds ?? UInt64.max >= currentDateTime
    {
      chimeThemeSettings.append(chimeTheme.name)
    }
  }
}
Once you have the name of the theme you want, such as Christmas, you can
select it using the setSelectedTimeboxedThemeName()
function on the ChimeThemes trait.
private func setChimeTheme(to value: String) async throws {
_ = try await chimeThemeTrait.update {
$0.setSelectedTimeboxedThemeName(value)
}
}