original source : http://www.androiddocs.com/training/wearables/apps/voice.html

When you want to receive data input from the user by voice:

Obtaining Free-form Speech Input

In addition to using voice actions to launch activities, you can also call the system’s built-in Speech Recognizer activity to obtain speech input from users. This is useful to obtain input from users and then process it, such as doing a search or sending it as a message.

In your app, you call startActivityForResult() using the ACTION_RECOGNIZE_SPEECH action. This starts the speech recognition activity, and you can then handle the result in onActivityResult().

private static final int SPEECH_REQUEST_CODE = 0;

// Create an intent that can start the Speech Recognizer activity
private void displaySpeechRecognizer() {
   Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
   intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
           RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
// Start the activity, the intent will be populated with the speech text
   startActivityForResult(intent, SPEECH_REQUEST_CODE);
}

// This callback is invoked when the Speech Recognizer returns.
// This is where you process the intent and extract the speech text from the intent.
@Override
protected void onActivityResult(int requestCode, int resultCode,
       Intent data) {
   if (requestCode == SPEECH_REQUEST_CODE && resultCode == RESULT_OK) {
       List<String> results = data.getStringArrayListExtra(
               RecognizerIntent.EXTRA_RESULTS);
       String spokenText = results.get(0);
       // Do something with spokenText
   }
   super.onActivityResult(requestCode, resultCode, data);
}
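
Not every device has a speech recognizer installed. As a defensive sketch (an assumption, not part of the original sample), you can check that the ACTION_RECOGNIZE_SPEECH intent resolves to an activity before launching it:

// Sketch (not from the original page): only launch the recognizer
// if some activity on the device can actually handle the intent.
private void displaySpeechRecognizerSafely() {
   Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
   intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
           RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
   if (intent.resolveActivity(getPackageManager()) != null) {
       startActivityForResult(intent, SPEECH_REQUEST_CODE);
   } else {
       // No recognizer available; degrade gracefully, e.g. show a message.
   }
}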

original source: http://www.androiddocs.com/training/wearables/notifications/pages.html

Adding Pages to a Notification

When you’d like to provide more information without requiring users to open your app on their handheld device, you can add one or more pages to the notification on the wearable. (Use pages when you want to deliver additional information in a notification.)

  1. Create the main notification (the first page) with NotificationCompat.Builder, in the way you’d like the notification to appear on a handset.
  2. Create the additional pages for the wearable with NotificationCompat.Builder.
  3. Apply the pages to the main notification with the addPage() method, or add multiple pages in a Collection with the addPages() method (see the sketch after the code below).
// Create builder for the main notification
NotificationCompat.Builder notificationBuilder =
       new NotificationCompat.Builder(this)
       .setSmallIcon(R.drawable.new_message)
       .setContentTitle("Page 1")
       .setContentText("Short message")
       .setContentIntent(viewPendingIntent);

// Create a big text style for the second page
BigTextStyle secondPageStyle = new NotificationCompat.BigTextStyle();
secondPageStyle.setBigContentTitle("Page 2")
              .bigText("A lot of text...");

// Create second page notification
Notification secondPageNotification =
       new NotificationCompat.Builder(this)
       .setStyle(secondPageStyle)
       .build();

// Extend the notification builder with the second page
Notification notification = notificationBuilder
       .extend(new NotificationCompat.WearableExtender()
               .addPage(secondPageNotification))
       .build();

// Issue the notification
NotificationManagerCompat notificationManager =
       NotificationManagerCompat.from(this);
notificationManager.notify(notificationId, notification);
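
Step 3 also mentions adding multiple pages at once with addPages(); a minimal sketch, assuming the pages are Notification objects built the same way as secondPageNotification above (thirdPageNotification is an illustrative name, not from the original page):

// Sketch: add several extra pages at once with addPages().
List<Notification> extraPages = new ArrayList<Notification>();
extraPages.add(secondPageNotification);
extraPages.add(thirdPageNotification);

Notification multiPageNotification = notificationBuilder
       .extend(new NotificationCompat.WearableExtender()
               .addPages(extraPages))
       .build();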

original source : http://www.androiddocs.com/design/wear/index.html

The Android Wear UI consists of two main spaces centered around the core functions of Suggest and Demand.

Suggest: The Context Stream

The context stream is a vertical list of cards.

Only one card is displayed at a time, and background photos are used to provide additional visual information.

Cards in the stream are more than simple notifications. They can be swiped horizontally to reveal additional pages.


Demand: The Cue Card

The cue card allows users to speak to Google. The cue card is opened by saying “OK Google” or by tapping on the background of the home screen.



Other UI Features

  • The Home screen
  • Watch faces 
  • low-power Ambient Mode
  • Swiping down on the Home screen reveals the Date and Battery display. 
  • The Settings screen.
  • Full screen apps 
  • The background
  • Status indicators, showing connectivity, charging status, airplane mode, and in some watch faces a count of unread items.
  • The top ranked card in the Context Stream.

The core philosophy of Android Wear

The device figures out the context on its own and proactively suggests the information the user needs; the user, in turn, demands work from the device in a simplified way.

The necessary information should be graspable intuitively, and the user's interactive actions should be kept to a minimum.

original source: https://developer.android.com/training/wearables/apps/voice.html


Two types of voice actions

  • System-provided: voice actions already defined by the system
  • App-provided: voice actions defined by an app, such as launching a specific app's activity

Declare System-provided Voice Actions

When users speak the voice action, your app can filter for the intent that is fired to start an activity. If you want to start a service to do something in the background, show an activity as a visual cue and start the service in the activity. Make sure to call finish() when you want to get rid of the visual cue.

<activity android:name="MyNoteActivity">
     <intent-filter>
         <action android:name="android.intent.action.SEND" />
         <category android:name="com.google.android.voicesearch.SELF_NOTE" />
     </intent-filter>
 </activity>
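
The paragraph above says to treat the activity as a visual cue, hand the real work to a service, and call finish(). A rough sketch of what MyNoteActivity could do, assuming a hypothetical SaveNoteService in the same app (the spoken note arrives in the standard EXTRA_TEXT extra of the ACTION_SEND intent):

public class MyNoteActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // The spoken note arrives as the ACTION_SEND text extra.
        String noteText = getIntent().getStringExtra(Intent.EXTRA_TEXT);

        // Hand the real work off to a background service.
        // SaveNoteService is a hypothetical service of this app.
        Intent saveIntent = new Intent(this, SaveNoteService.class);
        saveIntent.putExtra(Intent.EXTRA_TEXT, noteText);
        startService(saveIntent);

        // The activity was only a visual cue, so dismiss it right away.
        finish();
    }
}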

The system provides a number of example voice intents; for more voice intents, see Common intents.

Declare App-provided Voice Actions

You can start your apps directly with a “Start MyActivityName” voice action.

<application>
 <activity android:name="StartRunActivity" android:label="MyRunningApp">
     <intent-filter>
         <action android:name="android.intent.action.MAIN" />
         <category android:name="android.intent.category.LAUNCHER" />
     </intent-filter>
 </activity>
</application>

The value of the label attribute is the phrase that follows “Start” in the voice command (here, “Start MyRunningApp”).

Obtaining Free-form Speech Input (how to receive voice input from the user)

private static final int SPEECH_REQUEST_CODE = 0;

// Create an intent that can start the Speech Recognizer activity
private void displaySpeechRecognizer() {
   Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
   intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
           RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
// Start the activity, the intent will be populated with the speech text
   startActivityForResult(intent, SPEECH_REQUEST_CODE);
}

// This callback is invoked when the Speech Recognizer returns.
// This is where you process the intent and extract the speech text from the intent.
@Override
protected void onActivityResult(int requestCode, int resultCode,
       Intent data) {
   if (requestCode == SPEECH_REQUEST_CODE && resultCode == RESULT_OK) {
       List<String> results = data.getStringArrayListExtra(
               RecognizerIntent.EXTRA_RESULTS);
       String spokenText = results.get(0);
       // Do something with spokenText
   }
   super.onActivityResult(requestCode, resultCode, data);
}

startActivityForResult() launches the activity that receives the voice input, with the action set to ACTION_RECOGNIZE_SPEECH. The result can then be received in onActivityResult().
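
As a small illustration of the “do something with spokenText” step, the recognized text could be forwarded to another part of the app, for example as a search query (SearchResultsActivity and the "query" extra are made-up names, not from the original page):

// Sketch: pass the recognized text on, e.g. as a search query.
private void handleSpokenText(String spokenText) {
   Intent intent = new Intent(this, SearchResultsActivity.class);
   intent.putExtra("query", spokenText);
   startActivity(intent);
}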