
Sound design rationale: top-level categories

Moderator: Michael Good

Postby Michael Good » Sun Mar 20, 2011 2:43 pm

Hi Joe, Anthony, and Jools,

Thank you very much for your feedback so far. I think that Joe's request for more details on the design rationale makes a lot of sense. I think this can also address Jools and Anthony's points. Hopefully it can also serve as source material for the specification.

First, what problems are we trying to solve with these new features? We are trying to add a categorical representation of timbre into MusicXML 3.0. We are doing this for two main reasons:

- To help improve score exchange between musicians who use different software applications and sound equipment by providing greater accuracy regarding playback sounds.

- To help people reading a MusicXML file to understand what sounds are intended for playback. The readability of MusicXML files helps software developers and also is a worthy goal on its own, considering the archival nature of MusicXML files.

The traditional approach to solving this problem - both in musicology and in computer notation software - has been through taxonomies and categories. As in most aspects of MusicXML, people contribute both privately off-list and publicly on-list, so I cannot discuss everything that has been considered in this design. List members have seen the Hornbostel-Sachs taxonomy discussed here, as well as references to the taxonomies from the Sibelius and capella notation editors. All of these were taken into consideration during the design.

The first problem we encounter is that categories and taxonomies are a limited method of representation. Often it is better to characterize phenomena along a variety of dimensions. General categorization systems like Hornbostel-Sachs also lack context sensitivity: the purpose for which a categorization is used helps determine what makes categories more or less useful. In my past work as a usability engineer I co-authored a paper about this, influenced by my colleague's familiarity with S. C. Pepper's book on World Hypotheses. (You can find that paper online at <http://michaelgood.info/publications/usability/interface-style-and-eclecticism/>.)

Given the state of the art, though, a categorization system provides the most practical solution at this time. The design should not exclude the development of a more dimensional markup of timbre in the future. I think we could add such a description to the score-instrument element, just as we have added the instrument-sound and virtual-instrument elements in version 3.0.
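To make the element relationships concrete, here is a minimal sketch of a score-instrument carrying both the new elements, built with Python's standard library. The element names score-instrument, instrument-sound, and virtual-instrument come from the discussion; the sound ID wind.flutes.flute.piccolo appears later in this thread; the virtual-library and virtual-name children and their values are my own illustrative assumptions.

```python
import xml.etree.ElementTree as ET

# Sketch (not a definitive MusicXML 3.0 example): a score-instrument that
# pairs a categorical instrument-sound ID with a virtual-instrument
# description. The library and patch names below are invented placeholders.
score_instrument = ET.Element("score-instrument", {"id": "P1-I1"})
ET.SubElement(score_instrument, "instrument-name").text = "Piccolo"
# Hierarchical sound ID from the taxonomy under discussion:
ET.SubElement(score_instrument, "instrument-sound").text = "wind.flutes.flute.piccolo"
virtual = ET.SubElement(score_instrument, "virtual-instrument")
ET.SubElement(virtual, "virtual-library").text = "Example Library"  # assumed value
ET.SubElement(virtual, "virtual-name").text = "Piccolo Solo"        # assumed value

print(ET.tostring(score_instrument, encoding="unicode"))
```

The point of the sketch is the separation of concerns: instrument-sound names the categorical timbre, while virtual-instrument pins down a specific playback library, so a future dimensional markup could sit alongside both.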

So how should we approach the design of the top-level categories? Given our two goals, we want top-level categories that 1) group similar sounds together and 2) reflect instrument categories as commonly understood by musicians. Hornbostel-Sachs does a wonderful job for use in ethnomusicology and related areas. However, its categorization by sound-producing mechanism contrasts with how most musicians think about things, which is either by the performing mechanism or materials.

Using a combination of these two factors - performing mechanism and materials - leads to the top-level categories we have chosen. That is why these categories look more like Sibelius's categories than capella's categories. Capella leans more towards the Hornbostel-Sachs approach, in part because they believe it works better for sound substitution. We are not trying to support sound substitution in our design. We are instead trying to label instrument sounds for ease of identification by software and people. Different goals will lead to different category schemes.

In most cases performing mechanism and materials are strongly correlated. I used the more common terminology when possible, such as brass, wind, and strings. Brass is a commonly understood category for common Western music notation, even though the category (as of beta 1) also includes the alphorn and didgeridoo. I decided not to have a top-level percussion category, but to promote the different percussion categories to their own top levels.

The playing techniques that we are excluding from the instrument-sound list apply to instruments where multiple techniques are possible. You can use pizzicato for a normally bowed instrument in the strings category, you can bow a normally struck instrument like those in the different percussion categories, and so on. Different percussion beaters and striking positions for the same instrument are not included in the instrument-sound list. The same is true for a variety of pedal and amplification effects for guitars and other electric and electronic instruments. We also exclude solo/ensemble distinctions, since we already have separate elements for those.

One place where performing mechanism and materials fail to guide us effectively is with electronic instruments and synthetic sounds. Here the quality of sound is more useful than the performing mechanism, which is dominated by keyboards and laptops now, with tablets and phones looking to come on strong in the future. In this case, we follow Sibelius's lead of assigning electronic sounds that are modeled on acoustic sounds to the appropriate acoustic category.

Sibelius has a lot more classification of synthetic sounds in the synth category than was included in our Beta 1 draft. I left most of it out because I did not find it very understandable, even given my (decades-old and thus outdated) experience in sound synthesis during my college days at the MIT Experimental Music Studio. I understand how it helps Sibelius internally as far as sound substitution, but I am not convinced of its effectiveness for musician-friendly sound identification. In Beta 1 we basically try to support the General MIDI Synth and Sound Effect sounds, either through their own synth categories or assignment to an appropriate acoustic category.

We would be very interested in finding other categorization systems for synthetic sounds that could be helpful here. Hornbostel-Sachs does not help much in this area. Do any of you have references that we can use, either in publications or in software?

Best regards,

Michael Good Recordare LLC

Re: Sound design rationale: top-level categories

Postby Henry Howey » Sun Mar 20, 2011 10:18 pm

One other note. Will there be a notation to indicate a SCALA temperament in the setup?



Re: Sound design rationale: top-level categories

Postby Michael Good » Sun Mar 20, 2011 11:26 pm

Hello Henry,

Thank you for the suggestion. At some point we want to explore adding a higher-level representation of temperament to MusicXML. However, that is outside the scope of what we will be doing for MusicXML 3.0. In the meantime, an application could add Scala-compatible information to a MusicXML file using XML processing instructions.
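As a minimal sketch of the processing-instruction idea, using Python's standard library: the target name scala-temperament, the payload format, and the file name below are my own illustrative assumptions, not anything defined by MusicXML or Scala.

```python
import xml.etree.ElementTree as ET

# Sketch: attaching Scala-style temperament information to a MusicXML file
# via an XML processing instruction, as suggested above. XML parsers that do
# not understand the instruction simply ignore it, which is what makes this
# a workable interim mechanism. Target name and payload are invented.
pi = ET.ProcessingInstruction("scala-temperament", "file=example.scl keynote=C")
print(ET.tostring(pi, encoding="unicode"))
```

An application that recognized the instruction could load the referenced scale file; every other application would pass the document through unchanged.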

Note that you can specify exact alterations for each note with the alter element, so precise tunings can be specified in a MusicXML file on a note-by-note basis. That has been true since version 1.0. This is an important feature for making use of the new microtonal and Turkish accidentals added in MusicXML 3.0.
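For example, here is a sketch of a quarter-tone-sharp pitch using the alter element. The pitch, step, alter, and octave element names are standard MusicXML; the specific note chosen is illustrative.

```python
import xml.etree.ElementTree as ET

# Sketch: a MusicXML pitch raised by a quarter tone. The alter element takes
# a decimal number of semitones, so 0.5 means a quarter tone sharp; this
# note-by-note mechanism has been available since MusicXML 1.0.
pitch = ET.Element("pitch")
ET.SubElement(pitch, "step").text = "C"
ET.SubElement(pitch, "alter").text = "0.5"   # +0.5 semitone = quarter tone sharp
ET.SubElement(pitch, "octave").text = "4"

print(ET.tostring(pitch, encoding="unicode"))
```

Because alter accepts arbitrary decimal values, an exporting application could in principle bake an entire temperament into a file note by note, even without a higher-level temperament element.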

Are there particular applications already using Scala information that you would like to see better supported by MusicXML software in the future?

Best regards,

Michael Good Recordare LLC



Re: Sound design rationale: top-level categories

Postby Jools Lewthwaite » Mon Mar 21, 2011 11:33 am

Hi Michael, thanks for this detailed answer.

I'm afraid, though, that it seems to raise more questions than it answers.

Nowhere has the rationale behind classifying a didgeridoo as 'brass' been explained; you have not even addressed the 'bad naming' issue.

How is the readability issue addressed when 'kazoo' is 'brass'?

Your thinking appears muddied. Is the material of the instrument important or not important to what you are trying to do? I simply cannot follow what you are trying to say here.

Perhaps you should start by actually telling us what definition of 'timbre' you happen to be using, and, given the well-known psychoacoustic fact that instruments are recognized at least as much by their attack/sustain profile as by their admixture of partials, why you have focused only on this aspect?

What is your particular taxonomic hierarchy 'more accurate' than? (quoting your text)

I am not a 'patch' expert, but can we assume that the inclusion of 'goblin' points to some basis in Yamaha XG? (Someone with more patch knowledge than me said immediately, 'looks like Yamaha XG'.) Is it your intention to promote 'goblin' over all the other world instruments outside the Western classical era, for which your taxonomy is woefully inadequate? Is it true to say that some of the offline discussion has basically just been the adoption of this standard? Did you consider this standard? If so, why don't you mention it? If not, what is a 'goblin'?

You claim that taxonomies (hierarchies) are limited, yet that is exactly what you have produced.

You claim that you need more than one dimension, yet this dimensional product is far from apparent in your design. I suggested in my post that instrument sound might be seen as a product of instrument type crossed with playing technique. To allow identity, the playing technique list is produced in full and then attached, with restrictions, to each node on the instrument tree: voila, a product space.

You suggest Hornbostel-Sachs has issues for you; indeed, that may be the case, and I am not particularly championing one ontology. Nonetheless, it is in fact your definition of 'brass' as 'produced by lip vibration' which prompted this suggestion, since the method of sound production is the top level of Hornbostel-Sachs. I haven't the faintest idea what the problem with 'context free' is with Hornbostel-Sachs; presumably one would add the additional context via one of the other dimensions you were talking about.

That will probably do for now except,

Finally: as a mathematician I have always treated 'arguments from authority' with the deepest suspicion. In this case, having read the discussion article you referenced, I still fail to see where the authority actually resides. I strongly suggest that all posters on this (or any technical) forum avoid references to their various skills and experience. One either has a coherent answer or one doesn't; this applies whether you are a world expert or an infant-school kid. Generally speaking, those who rely on being an 'authority' don't have a coherent answer, because common sense says they would just give it if they did.

Jools


Re: Sound design rationale: top-level categories

Postby Henry Howey » Mon Mar 21, 2011 7:37 pm

Since the Garritan Sounds are so prevalent, I have found the ability to use SCALA temperament settings for VST instruments very important to the quality of the resulting soundfile.



Re: Sound design rationale: top-level categories

Postby Michael Good » Mon Mar 21, 2011 10:12 pm

Hi Jools,

Thank you for your feedback. As I mentioned, the goals for the standard sounds in MusicXML 3.0 are:

- Help improve score exchange between musicians who use different software applications and sound equipment by providing greater accuracy regarding playback sounds. Greater accuracy in this case means greater than General MIDI, which is about the best that MusicXML 2.0 can offer.

- Help people reading a MusicXML file to understand what sounds are intended for playback.

Our approach to doing this is to create an extensible category system that 1) groups similar sounds together and 2) reflects instrument categories as commonly understood by musicians.

To answer your specific questions:

- My thinking is that classifying the didgeridoo under brass helps with criterion 1, perhaps (but not necessarily) at the expense of criterion 2.

- The kazoo is wind.kazoo, not brass.kazoo.

- The thinking is not muddled, but since we're balancing two goals that do not always align, parts of the taxonomy may appear that way. Note that a clean dimensional representation of timbre is not among our goals for MusicXML 3.0. On the other hand, we are not excluding such a representation from being added in the future.

- We are not trying to represent timbre directly, but instead support stand-ins such as virtual instrument libraries and General MIDI sounds.

- We are trying for something more accurate than General MIDI, in particular something that better supports virtual instrument libraries.

- The goblins sound - and much of the synth and effects category - comes from General MIDI. We are trying to improve on General MIDI, but we want to include support for this well-established standard in this taxonomy. I agreed with Anthony that the synth category could use work, and welcome suggestions and references to prior work in this area.

- I am surprised to see that you consider the coverage of world instruments in this draft to be woefully inadequate. Most comments we have received have asked why early instruments or modern synthetic sounds are more poorly represented than world instruments. Please share any specific instruments we need to add, or specific problems with the current categorization that could be better solved another way.

- My lip vibration comment was trying to circumvent debates on specific problematic instruments like didgeridoo and alphorn, but clearly that failed. Again, the goal here is to group like sounds together, and current notation software makes similar choices for these instruments. (One program goes one way on the alphorn, while another goes the other way on the didgeridoo.)

- My citation of my earlier paper on the problems with categories was not an appeal to authority but an attempt to explain perspective. Perhaps part of what we have here comes from a clash of perspectives between your mathematical background and my usability background? It may also be because our taxonomy is not meeting your specific application needs. However, that is a difficult problem to resolve without knowing what those needs are.

We're trying to solve a very practical problem of loss of playback information when using MusicXML for score interchange. We are not trying to define a mathematical model of timbre, though perhaps that may be a good addition for the future.

We add features to MusicXML 3.0 to solve practical problems for musicians. We can't always tackle an entire problem at once. Here the categorical approach will be sufficient to solve the main interchange problem that our customers have. It's probably the number one problem in using MusicXML 2.0 for score interchange between different applications, and people have been asking us to fix it for years.

Naturally this is Beta 1, so it's still a draft and open to change. We encourage specific suggestions for improvements. For instance, what instruments are missing? We've already incorporated suggestions for adding the nyckelharpa, expanding the crumhorn representation, adding the rackett and sackbuts, and more. These will appear in Beta 2. We have also revised the piccolo to be wind.flutes.flute.piccolo.
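Because the sound IDs are dotted paths, software can group them by top-level category with ordinary string handling. A sketch: wind.flutes.flute.piccolo and wind.kazoo appear in this thread, while brass.trumpet and the helper name top_level are my own illustrative assumptions.

```python
from collections import defaultdict

# Sketch: grouping hierarchical instrument-sound IDs by top-level category.
# IDs like "wind.flutes.flute.piccolo" are dotted paths, so the first
# segment names the top-level category in the taxonomy.
def top_level(sound_id: str) -> str:
    """Return the top-level category of a dotted instrument-sound ID."""
    return sound_id.split(".", 1)[0]

# Two IDs from this thread plus one assumed example:
sounds = ["wind.flutes.flute.piccolo", "wind.kazoo", "brass.trumpet"]

by_category = defaultdict(list)
for s in sounds:
    by_category[top_level(s)].append(s)

print(dict(by_category))
```

This kind of prefix structure is also what makes the taxonomy extensible: deeper IDs can be added later without disturbing how existing software groups sounds at the top level.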

Is there another categorization we can use that may work better for score exchange? We already know for instance that the synth category is problematic, but how specifically can we improve it?

Best regards,

Michael Good Recordare LLC

Re: Sound design rationale: top-level categories

Postby Michael Good » Mon Mar 21, 2011 10:17 pm

Hi Henry,

Henry Howey wrote:Since the Garritan Sounds are so prevalent, I have found the ability to use SCALA temperament settings for VST instruments very important to the quality of the resulting soundfile.

Thank you for the explanation. Can you please let me know where you are specifying the Scala temperament settings for the Garritan instruments? Is that within the Aria player, or elsewhere?

I'm trying to determine if the notation programs that would be doing the export and import have the ability to access this information. I am currently on the road without full access to my virtual instrument reference information.

If applications can already get at this information, that's a different situation than if Aria would need to be modified in order for programs to import and export it.

Best regards,

Michael Good Recordare LLC

Re: Sound design rationale: top-level categories

Postby Henry Howey » Tue Mar 22, 2011 1:38 am

Yes, that is accessed via the Aria player. One is also able to indicate a central keynote to further control the relationships that are "meantone-like."



Re: Sound design rationale: top-level categories

Postby Hartmut Lemmel » Tue Mar 22, 2011 4:44 am

Dear Michael,

You defined the following goals for the taxonomy:
- group similar sounds together,
- reflect instrument categories as commonly understood by musicians, and
- support score exchange between different software applications better than General MIDI.

The last goal is achieved by any taxonomy. The first two goals partly contradict each other, for example in the question of whether or not to create a "keyboard" category.

1) My question is: why does it have to be commonly understood by musicians? If I press Export in Finale and Import in Sibelius, I never come into contact with the taxonomy. Expert users who really read the MusicXML file can just as well deal with a purely timbre-oriented taxonomy. I mean, the timbre-oriented capella taxonomy is not really hard to read. I admit that we don't use our capella taxonomy to structure, for example, a menu where the user can select an instrument. But that's also not the purpose of the MusicXML taxonomy, is it?

2) I would strongly welcome defining another goal for the taxonomy, namely to create a standard by which sound providers can identify their instruments. All notation programs will have to know the taxonomy anyway because of MusicXML import/export, so the MusicXML taxonomy would be the ideal candidate for a general standard in music software. This means:
- To gain the acceptance of the sound providers, you should ask them what type of taxonomy they prefer (timbre, playing technique, ...).
- The taxonomy would also be used completely separately from MusicXML, and one should think about how it will be extended and maintained in the future. In particular, if new sounds appear on the market, the taxonomy should be extended immediately, not two years later with the next release of MusicXML.
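Such a shared identifier might look like the following in a MusicXML file. This is a sketch of the element under discussion, which became `instrument-sound` in MusicXML 3.0; treat the exact IDs as illustrative:

```xml
<score-part id="P1">
  <part-name>Violin</part-name>
  <score-instrument id="P1-I1">
    <instrument-name>Violin</instrument-name>
    <!-- Standard dotted sound ID that a sound provider could also publish -->
    <instrument-sound>strings.violin</instrument-sound>
  </score-instrument>
</score-part>
```

A sound library that tags its patches with the same dotted IDs lets a notation program choose a matching sound without any vendor-specific lookup table.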

In short, I really welcome your efforts to create a taxonomy for instrument identification, and for me the details of the taxonomy are less important than its wide use and acceptance outside of MusicXML as well.

Best wishes, Hartmut
Hartmut Lemmel
 

Re: Sound design rationale: top-level categories

Postby Joe Berkovitz » Tue Mar 22, 2011 5:41 am

Michael,

I have been reading the various responses to the sound proposal with great interest. I find a few points in Hartmut's last email that resonate strongly for me:

- Acceptance by sound and software providers is key to the success of this effort.
- Acceptance by human taxonomists is not so important (and it's impossible to please all of them).
- The content of the sound taxonomy will be a moving target, forever, and a mechanism for expansion is essential (and must be orthogonal to MusicXML's release schedule).

I would like to ask again why MusicXML should not adopt *any* existing classification scheme that a) lacks strong technical objections, b) has demonstrable traction, and c) has successfully expanded to accommodate new instruments since its origin. I do not see an advantage in doing something different in the name of what feels like only slightly improved readability for human readers. Nor do I see the need to create a scheme that addresses sound substitution, etc., as these can be dealt with by "outboard" ontologies (which will also be a matter of opinion and will vary according to the sound library in use). What matters is to get something out there with a minimum of fuss and a maximum probability of acceptance in the industry.
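To make the "outboard" idea concrete: a sound library could ship its own mapping from taxonomy IDs to its patches, entirely outside of MusicXML. The sketch below is purely hypothetical; the file format, element names, library name, and patch names are invented for illustration:

```xml
<!-- Hypothetical vendor mapping file; not part of MusicXML -->
<sound-map library="ExampleStrings">
  <map sound-id="strings.violin" patch="Violin Solo KS" bank="0" program="40"/>
  <map sound-id="strings.viola"  patch="Viola Solo KS"  bank="0" program="41"/>
  <!-- A vendor-chosen substitution when the exact sound is unavailable -->
  <map sound-id="strings.violin.baroque" patch="Violin Solo KS"/>
</sound-map>
```

The substitution policy lives in the vendor's mapping, so the MusicXML file itself only needs to carry the taxonomy ID.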

To that end, Sibelius SoundWorld, capella, or some expanded dialect of Hornbostel-Sachs would all work for me. And your proposed scheme works for me too -- but being new, it carries the additional burden of needing to gain traction.

. . . Joe

Joe Berkovitz President Noteflight LLC 84 Hamilton St, Cambridge, MA 02139 phone: +1 978 314 6271 www.noteflight.com


Joe Berkovitz
 

