original source : https://stackoverflow.com/questions/9586032/android-difference-between-onintercepttouchevent-and-dispatchtouchevent

#1 answer

dispatchTouchEvent is actually defined on Activity, View and ViewGroup. Think of it as a controller which decides how to route the touch events.

For example, the simplest case is that of View.dispatchTouchEvent which will route the touch event to either OnTouchListener.onTouch if it’s defined or to the extension method onTouchEvent.

A touch event can be handled either by a listener attached to the View, ViewGroup, or Activity (OnTouchListener.onTouch in the example above) or by a handler the element itself provides (onTouchEvent in the example above); dispatchTouchEvent keeps routing the event until some element is chosen to handle it.

===========================================================

Reference: https://stackoverflow.com/a/12646163

The basic difference is that event handlers let the originating object itself do something in response to the event, whereas event listeners let other objects do something in response to events originating in the object.

For example: your activity has a button. If you want your activity to handle when someone touches the button, you use an event listener (by doing btn.setOnTouchListener(…)). BUT, if you want to create a specialized button (e.g. one that looks like a dog and barks when touched), you can create a subclass of Button and implement its event handler, onTouchEvent(…). In this latter case, the button itself will handle its touch event.

You can attach a listener that will handle the event via setOnTouchListener; the code that handles the event does not have to live in the view itself and can belong to another element. In other words, when the event is handled outside the element, the handling object is called a listener. When the element handles the event internally, it is called a handler, and onTouchEvent() is an example of that.
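The contrast above can be sketched in Java. This is only an illustration: BarkingButton and bark() are hypothetical names, not anything from the Android API.

```java
// Approach 1: event LISTENER -- another object (here, the activity)
// reacts to touches originating in the button.
Button btn = (Button) findViewById(R.id.my_button);
btn.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            // the activity reacts to the touch
            return true; // consume the event
        }
        return false;
    }
});

// Approach 2: event HANDLER -- a specialized button handles its own
// touches by overriding its onTouchEvent() extension method.
public class BarkingButton extends Button {
    public BarkingButton(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            bark(); // the button itself responds
        }
        return super.onTouchEvent(event);
    }

    private void bark() { /* play a barking sound */ }
}
```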

===========================================================

For ViewGroup.dispatchTouchEvent things are way more complicated. It needs to figure out which one of its child views should get the event (by calling child.dispatchTouchEvent). This is basically a hit testing algorithm where you figure out which child view’s bounding rectangle contains the touch point coordinates.

But before it can dispatch the event to the appropriate child view, the parent can spy on and/or intercept the event altogether. This is what onInterceptTouchEvent is there for. So it calls this method first, before doing the hit testing, and if the event was hijacked (by returning true from onInterceptTouchEvent) it sends an ACTION_CANCEL to the child views so they can abandon their touch event processing (from previous touch events); from then onwards, all touch events at the parent level are dispatched to onTouchListener.onTouch (if defined) or onTouchEvent(). Also, in that case, onInterceptTouchEvent is never called again.
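As a sketch of the interception described above, a parent that wants to steal horizontal drags from its children might override onInterceptTouchEvent() like this. HorizontalStealingLayout is a hypothetical class name and the drag detection is deliberately simplified:

```java
public class HorizontalStealingLayout extends FrameLayout {
    private float mDownX;
    private final int mTouchSlop;

    public HorizontalStealingLayout(Context context, AttributeSet attrs) {
        super(context, attrs);
        // Minimum distance a finger must travel before we call it a drag.
        mTouchSlop = ViewConfiguration.get(context).getScaledTouchSlop();
    }

    @Override
    public boolean onInterceptTouchEvent(MotionEvent ev) {
        switch (ev.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:
                mDownX = ev.getX();
                return false; // let children see the down event
            case MotionEvent.ACTION_MOVE:
                // Hijack the gesture once it looks like a horizontal drag.
                // The framework then sends ACTION_CANCEL to the children,
                // and subsequent events go to this parent's onTouchEvent().
                return Math.abs(ev.getX() - mDownX) > mTouchSlop;
        }
        return false;
    }

    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        // Handle the stolen drag here.
        return true;
    }
}
```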

Would you even want to override [Activity|ViewGroup|View].dispatchTouchEvent? Unless you are doing some custom routing you probably should not.

The main extension methods are ViewGroup.onInterceptTouchEvent, if you want to spy on and/or intercept touch events at the parent level, and View.onTouchListener/View.onTouchEvent for main event handling.

All in all, it's an overly complicated design in my opinion, but Android APIs lean more toward flexibility than simplicity.

#2 answer

Because this is the first result on Google, I want to share with you a great talk by Dave Smith on YouTube: Mastering the Android Touch System (the video is about an hour long), and the slides are available here. It gave me a good, deep understanding of the Android touch system:

How the Activity handles touch:

  • Activity.dispatchTouchEvent()
      • Always first to be called
      • Sends event to root view attached to Window
  • Activity.onTouchEvent()
      • Called if no views consume the event
      • Always last to be called

How the View handles touch:

  • View.dispatchTouchEvent()
      • Sends event to the listener first, if one exists: View.OnTouchListener.onTouch()
      • If not consumed, processes the touch itself: View.onTouchEvent()

How a ViewGroup handles touch:

  • ViewGroup.dispatchTouchEvent()
      • onInterceptTouchEvent()
          • Checks if it should supersede the children
          • Passes ACTION_CANCEL to the active child
          • If it returns true once, the ViewGroup consumes all subsequent events
      • For each child view (in the reverse order they were added)
          • If the touch is relevant (inside the view): child.dispatchTouchEvent()
          • If it is not handled by a previous child, dispatch to the next view
      • If no child handles the event, the listener gets a chance: OnTouchListener.onTouch()
      • If there is no listener, or it is not handled: onTouchEvent()
  • Intercepted events jump over the child step

He also provides example code of custom touch on github.com/devunwired/.

Answer: Basically, dispatchTouchEvent() is called on every View layer to determine if a View is interested in an ongoing gesture. In a ViewGroup, the ViewGroup has the ability to steal the touch events in its dispatchTouchEvent() method, before it would call dispatchTouchEvent() on the children. The ViewGroup stops the dispatching only if its onInterceptTouchEvent() method returns true. The difference is that dispatchTouchEvent() dispatches MotionEvents, while onInterceptTouchEvent tells whether it should intercept (not dispatch the MotionEvent to the children) or not (dispatch to the children).

You could imagine the code of a ViewGroup doing more-or-less this (very simplified):

public boolean dispatchTouchEvent(MotionEvent ev) {
    // Ask onInterceptTouchEvent() first; if it does not intercept,
    // offer the event to the children.
    if (!onInterceptTouchEvent(ev)) {
        for (View child : children) {
            if (child.dispatchTouchEvent(ev))
                return true;
        }
    }
    // Intercepted, or no child consumed it: handle it at this level.
    return super.dispatchTouchEvent(ev);
}

#3 answer 

[image from the original answer: touch event flow diagram]

original source: http://www.androiddocs.com/training/wearables/apps/voice.html

When you want to receive data input from the user by voice:

Obtaining Free-form Speech Input

In addition to using voice actions to launch activities, you can also call the system’s built-in Speech Recognizer activity to obtain speech input from users. This is useful to obtain input from users and then process it, such as doing a search or sending it as a message.

In your app, you call startActivityForResult() using the ACTION_RECOGNIZE_SPEECH action. This starts the speech recognition activity, and you can then handle the result in onActivityResult().

private static final int SPEECH_REQUEST_CODE = 0;

// Create an intent that can start the Speech Recognizer activity
private void displaySpeechRecognizer() {
   Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
   intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
           RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
// Start the activity, the intent will be populated with the speech text
   startActivityForResult(intent, SPEECH_REQUEST_CODE);
}

// This callback is invoked when the Speech Recognizer returns.
// This is where you process the intent and extract the speech text from the intent.
@Override
protected void onActivityResult(int requestCode, int resultCode,
       Intent data) {
   if (requestCode == SPEECH_REQUEST_CODE && resultCode == RESULT_OK) {
       List<String> results = data.getStringArrayListExtra(
               RecognizerIntent.EXTRA_RESULTS);
       String spokenText = results.get(0);
       // Do something with spokenText
   }
   super.onActivityResult(requestCode, resultCode, data);
}

original source: http://www.androiddocs.com/training/wearables/notifications/voice-input.html

Receiving Voice Input in a Notification

If you have handheld notifications that include an action to input text, such as replying to an email, it would normally launch an activity on the handheld device to input the text. However, when your notification appears on a wearable, there is no keyboard input, so you can let users dictate a reply or provide pre-defined text messages using RemoteInput.

When users reply with voice or select one of the available messages, the system attaches the text response to the Intent you specified for the notification action and sends that intent to your handheld app.

Define the Voice Input

To create an action that supports voice input, create an instance of RemoteInput.Builder that you can add to your notification action. This class's constructor accepts a string that the system uses as the key for the voice input, which you'll later use to retrieve the text of the input in your handheld app. For example, here's how to create a RemoteInput object that provides a custom label for the voice input prompt:

// Key for the string that's delivered in the action's intent
private static final String EXTRA_VOICE_REPLY = "extra_voice_reply";

String replyLabel = getResources().getString(R.string.reply_label);

RemoteInput remoteInput = new RemoteInput.Builder(EXTRA_VOICE_REPLY)
       .setLabel(replyLabel)
       .build();



Add Pre-defined Text Responses

In addition to voice input, you can provide pre-defined text responses the user can pick from by calling setChoices() on the builder. First define the choices in res/values/strings.xml:

<?xml version="1.0" encoding="utf-8"?>
<resources>
   <string-array name="reply_choices">
       <item>Yes</item>
       <item>No</item>
       <item>Maybe</item>
   </string-array>
</resources>
public static final String EXTRA_VOICE_REPLY = "extra_voice_reply";
...
String replyLabel = getResources().getString(R.string.reply_label);
String[] replyChoices = getResources().getStringArray(R.array.reply_choices);

RemoteInput remoteInput = new RemoteInput.Builder(EXTRA_VOICE_REPLY)
       .setLabel(replyLabel)
       .setChoices(replyChoices)
       .build();



Add the Voice Input as a Notification Action

// Create an intent for the reply action
Intent replyIntent = new Intent(this, ReplyActivity.class);
PendingIntent replyPendingIntent =
       PendingIntent.getActivity(this, 0, replyIntent,
               PendingIntent.FLAG_UPDATE_CURRENT);

// Create the reply action and add the remote input
NotificationCompat.Action action =
       new NotificationCompat.Action.Builder(R.drawable.ic_reply_icon,
               getString(R.string.label), replyPendingIntent)
               .addRemoteInput(remoteInput)
               .build();

// Build the notification and add the action via WearableExtender
Notification notification =
       new NotificationCompat.Builder(mContext)
               .setSmallIcon(R.drawable.ic_message)
               .setContentTitle(getString(R.string.title))
               .setContentText(getString(R.string.content))
               .extend(new WearableExtender().addAction(action))
               .build();

// Issue the notification
NotificationManagerCompat notificationManager =
       NotificationManagerCompat.from(mContext);
notificationManager.notify(notificationId, notification);

Receiving the Voice Input as a String

(Notifications are said to be one-way, unlike sync, but you can still receive a reply message through the reply action's intent.)

To receive the user’s transcribed message in the activity you declared in the reply action’s intent, call getResultsFromIntent(), passing in the “Reply” action’s intent. This method returns a Bundle that contains the text response. You can then query the Bundle to obtain the response.

Note: Do not use Intent.getExtras() to obtain the voice result, because the voice input is stored as ClipData. The getResultsFromIntent() method provides a convenient way to receive a character sequence without having to process the ClipData yourself.

/**
* Obtain the intent that started this activity by calling
* Activity.getIntent() and pass it into this method to
* get the associated voice input string.
*/

private CharSequence getMessageText(Intent intent) {
   Bundle remoteInput = RemoteInput.getResultsFromIntent(intent);
   if (remoteInput != null) {
       return remoteInput.getCharSequence(EXTRA_VOICE_REPLY);
   }
   return null;
}


original source: https://developer.android.com/training/wearables/apps/voice.html


Two types of voice actions:

  • System-provided: voice actions already defined by the system
  • App-provided: voice actions defined by the app, or ones that launch a specific activity of the app

Declare System-provided Voice Actions

When users speak the voice action, your app can filter for the intent that is fired to start an activity. If you want to start a service to do something in the background, show an activity as a visual cue and start the service in the activity. Make sure to call finish() when you want to get rid of the visual cue.

<activity android:name="MyNoteActivity">
     <intent-filter>
         <action android:name="android.intent.action.SEND" />
         <category android:name="com.google.android.voicesearch.SELF_NOTE" />
     </intent-filter>
 </activity>
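The "visual cue" pattern described above (show an activity, hand the work to a service, then call finish()) might be sketched as follows. MyNoteService and the layout name are assumptions for illustration, not part of the original docs:

```java
public class MyNoteActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_my_note); // the visual cue

        // Hand the voice note text over to a background service.
        Intent work = new Intent(this, MyNoteService.class);
        work.putExtra(Intent.EXTRA_TEXT,
                getIntent().getStringExtra(Intent.EXTRA_TEXT));
        startService(work);

        // Remove the visual cue once the service has been started.
        finish();
    }
}
```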

The above is one example of a voice intent. To see more voice intents, see Common intents.

Declare App-provided Voice Actions

You can start your app directly with a “Start MyActivityName” voice action.

<application>
 <activity android:name="StartRunActivity" android:label="MyRunningApp">
     <intent-filter>
         <action android:name="android.intent.action.MAIN" />
         <category android:name="android.intent.category.LAUNCHER" />
     </intent-filter>
 </activity>
</application>

The text of the label attribute becomes the command spoken after “Start”.

Obtaining Free-form Speech Input (how to receive input values from the user by voice)

private static final int SPEECH_REQUEST_CODE = 0;

// Create an intent that can start the Speech Recognizer activity
private void displaySpeechRecognizer() {
   Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
   intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
           RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
// Start the activity, the intent will be populated with the speech text
   startActivityForResult(intent, SPEECH_REQUEST_CODE);
}

// This callback is invoked when the Speech Recognizer returns.
// This is where you process the intent and extract the speech text from the intent.
@Override
protected void onActivityResult(int requestCode, int resultCode,
       Intent data) {
   if (requestCode == SPEECH_REQUEST_CODE && resultCode == RESULT_OK) {
       List<String> results = data.getStringArrayListExtra(
               RecognizerIntent.EXTRA_RESULTS);
       String spokenText = results.get(0);
       // Do something with spokenText
   }
   super.onActivityResult(requestCode, resultCode, data);
}

startActivityForResult() launches the activity that will receive the voice input; the action is set to ACTION_RECOGNIZE_SPEECH, and the result can be received in onActivityResult().
