Users need to manually enable the feature in iOS 9. If you do enable it, Apple promises that "in no case is the device recording what the user says or sending that information to Apple before the feature is triggered."
Audio from the microphone is continuously compared against the ‘personalized’ model of how you said ‘Hey Siri’ during setup. It’s also compared against the ‘general’ model of how the iPhone thinks it should sound. Apple requires a match to both models to trigger the feature. Until both models are matched, no audio is sent off your iPhone.
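In other words, the trigger is gated on two independent matches. A minimal sketch of that logic, with hypothetical function names, scores, and thresholds (Apple has not published its actual implementation):

```python
# Hypothetical sketch of the two-model gate described above. Names,
# scores, and thresholds are illustrative assumptions, not Apple's code.

def matches(score: float, threshold: float) -> bool:
    """A model 'matches' when its similarity score clears its threshold."""
    return score >= threshold

def should_trigger(personal_score: float, general_score: float,
                   personal_threshold: float = 0.8,
                   general_threshold: float = 0.8) -> bool:
    # Both the personalized model AND the general model must match;
    # until then, no audio leaves the device.
    return (matches(personal_score, personal_threshold)
            and matches(general_score, general_threshold))

print(should_trigger(0.9, 0.9))  # True: both models match, Siri activates
print(should_trigger(0.9, 0.5))  # False: general model fails, nothing is sent
```

The point of requiring both matches is that a stranger saying "Hey Siri" may fit the general model but not your personalized one, so the phone stays silent and nothing is transmitted.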
“The ‘listening’ audio, which will be continuously overwritten, will be used to improve Siri’s response time in instances where the user activates Siri,” says Apple. The key phrase there is ‘activates Siri.’ Until you activate it, the patterns are matched locally, and the buffer of sound being monitored (from what I understand, just a few seconds) is continuously erased: unsent, unused, and unable to be retrieved at any point in the future.
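That "continuously overwritten" buffer behaves like a fixed-size rolling buffer: new audio pushes out the oldest audio, and nothing is kept unless the trigger fires. A sketch of that idea, with assumed capacity and names (Apple hasn't disclosed the actual buffer size or design):

```python
from collections import deque

# Hypothetical sketch of the few-second rolling buffer described above.
# The capacity, frame rate, and class/method names are assumptions.
BUFFER_SECONDS = 3
FRAMES_PER_SECOND = 10

class RollingAudioBuffer:
    def __init__(self):
        # A bounded deque: once full, appending a new frame silently
        # evicts the oldest one, so old audio is continuously overwritten.
        self.frames = deque(maxlen=BUFFER_SECONDS * FRAMES_PER_SECOND)

    def append(self, frame):
        self.frames.append(frame)

    def on_no_trigger(self):
        # No match: the buffer is erased, never sent or stored anywhere.
        self.frames.clear()

    def on_trigger(self):
        # Only on a confirmed 'Hey Siri' match is the audio handed off.
        return list(self.frames)

buf = RollingAudioBuffer()
for i in range(100):        # feed far more frames than the buffer holds
    buf.append(i)
print(len(buf.frames))      # 30: only the most recent few seconds survive
buf.on_no_trigger()
print(len(buf.frames))      # 0: everything older is gone for good
```

The design choice matters for privacy: because the storage is bounded and volatile, there is never a growing archive of ambient audio to leak or subpoena.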
Apple also confirmed that “If a user chooses to turn off Siri, Apple will delete the User Data associated with the user’s Siri identifier, and the learning process will start all over again.”
More details on how the feature works can be found at the link below...