Can we have an installation with our own specific jargon?

Yes, if the Speech-to-Text Engine you want to use supports this. Please contact us.

What if we don’t want any data leaving our own ICT infrastructure?

If you are dealing with sensitive information, you may ask for an in-house installation of M>D. We use state-of-the-art Speech-to-Text Engines, e.g. the Google Speech API and Nuance Dragon. The supplier of the Speech-to-Text Engine should also be willing to set up a confined server. Nuance has a Dragon SDK available. For other Speech-to-Text Engines, it might involve some diplomacy.
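For illustration only, here is a minimal sketch of what a standard cloud call looks like, using the publicly documented google-cloud-speech Python client (this is not M>D's own integration; the file name and settings are made up). It shows why the question matters: with a regular cloud engine the audio is uploaded to the provider's servers, which is exactly what an in-house, confined setup avoids.

```python
# Minimal sketch, for illustration only: transcribing a short recording
# with the Google Cloud Speech-to-Text client (google-cloud-speech).
# Note that the audio content is sent to Google's servers for recognition,
# which is what an in-house installation is meant to avoid.
from google.cloud import speech

client = speech.SpeechClient()

# "meeting_fragment.wav" is a hypothetical example file.
with open("meeting_fragment.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="nl-NL",  # e.g. Dutch; use "en-US" for English
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```

An in-house installation replaces such a cloud call with an engine running on a confined server inside your own infrastructure, which is why the supplier's cooperation (for instance via the Dragon SDK) is needed.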

Please contact us for a more in-depth discussion.

Can we select the speech-to-text engine (StTE) of our choice?

Right now, M>D supports only one StTE, but we can make a different StTE available upon request. You could then choose an StTE per meeting, or even try multiple engines to find the optimal AutoTranscript for your situation.

Please contact us to discuss your needs.


How about dialects?

Speech recognition works best when speakers avoid dialects. The quality of the AutoTranscript also depends on the coherence of the spoken text. You get the best transcript when speakers use grammatically correct sentences.

Can we mix languages per meeting?

Yes, some participants may speak English while others speak, for instance, Dutch. When you install the App you select a language. For a specific meeting you may switch to another language. The App then defaults to the last selected language.

But you can’t alternate between languages during a meeting. Once you select Dutch and start the meeting, you have to stick to Dutch. For the moment, that is; after all, things are moving quickly.


What languages does M>D support?

At the moment: Dutch, Flemish and English. In the near future we will add more languages.

While editing an AutoTranscript we can listen to the original recordings and mute each individual recording. Is any audio processing done to these recordings?

No. They might be converted to MP3 format, but there’s no audio enhancement involved; you hear the original recording as is. The audio files that are fed into the M>D AutoTranscribing engine, however, are processed to get the best possible transcript. M>D uses several optimization strategies and algorithms, really clever ones, but these are the tricks of our trade.

What is the best microphone?

For a fixed setting in a courtroom or conference room, we (MeetingtoDocs) choose the microphone. Together with the M>D conference station we supply excellent but expensive microphones with extremely directional sensitivity. This minimizes the possibility of crosstalk, even when participants sit quite close together.

In all other situations the story is somewhat more complex.

You might well get good results with low-cost, even crappy, in-ear headsets. Put the earbud on the wire with the mike in your ear, let the other one hang, and make sure that the mike is facing your mouth.

Lavalier microphones, the ones you clip to your tie or collar, might also work.

But good, more expensive microphones will sometimes, after a period of silence by the primary speaker, automatically increase their sensitivity and may start picking up a neighbour’s (secondary) voice. We therefore advise you to sit at least 1.5 m apart from each other and away from your neighbours.

Please check our online shop for the best choice of microphones.

What is the right setting for an interview?

A quiet spot is better than a noisy café. If you are in noisy surroundings, you may use noise-suppressing mikes. These advanced microphones are excellent at blocking general background noise, but they are often so clever that they tend to pick up secondary voices when the primary voice has been silent for some time.

So make sure that you are sitting well apart from each other, say 1.5 m, and not too close to your neighbours.

What does the M>D editor look like?

When you open the M>D editor you will see three parts: the AutoTranscript, the original audio and the part of the screen where you edit the AutoTranscript.

All three parts of the screen are synchronized: when you scroll in one element the other two will automatically follow.

What determines the quality of an AutoTranscript?

It all starts with a good recording. The M>D system makes individual recordings, which result in the best possible AutoTranscript.

The quality of the microphone is also a key factor, especially in noisy surroundings or when participants sit close to each other: the mike might pick up the voice of a person in the direct vicinity.

M>D has selected some excellent headsets and tabletop microphones with superb noise cancellation and directional sensitivity.

Although we record individual voices, quality drops when many people are talking simultaneously.

The AutoTranscript is better when participants talk in a normal tone of voice and in grammatically correct sentences.

Names might not always come out correctly. The same goes for high-tech language or trade-specific terms.

One thing is evident: correcting and editing the AutoTranscript is much less work than doing a transcript from scratch by hand.

You may use the AutoTranscript to compose a summary without first correcting it into a 100% reliable transcript.