Lucene Document Field Tokens

I was wondering whether any other developers have had to use a custom Analyzer/Tokenizer, or simply custom tokens, for their Lucene document fields.

We have a field, let’s say “Car List”, which would benefit from indexing the car’s “Brand”, “Name”, and “Color” together in one field.

Our goal is to index three tokens in that one field: “General Motors”, “Acadia”, and “Dark Blue”.
If we index it as a regular text field, the analyzer will instead create the tokens “General”, “Motors”, “Acadia”, “Dark”, and “Blue”.
With those tokens, a search for “Blue General Acadia” would return this document, which we don’t want; only a search for “Dark Blue General Motors Acadia” should match.
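To make the difference concrete, here is a plain-Java sketch (not actual Lucene API; the class and method names are invented for illustration) contrasting word-level tokenization with emitting each value as a single whole token:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PhraseTokenSketch {
    // Roughly what a standard text analyzer does: split every value on
    // whitespace and lowercase, so "General Motors" becomes two tokens.
    static List<String> wordTokens(List<String> values) {
        List<String> tokens = new ArrayList<>();
        for (String v : values) {
            for (String t : v.split("\\s+")) {
                tokens.add(t.toLowerCase());
            }
        }
        return tokens;
    }

    // The desired behavior: each value stays a single indexable token,
    // so "General Motors" is one term.
    static List<String> phraseTokens(List<String> values) {
        List<String> tokens = new ArrayList<>();
        for (String v : values) {
            tokens.add(v.toLowerCase());
        }
        return tokens;
    }

    public static void main(String[] args) {
        List<String> carList = Arrays.asList("General Motors", "Acadia", "Dark Blue");
        System.out.println(wordTokens(carList));   // [general, motors, acadia, dark, blue]
        System.out.println(phraseTokens(carList)); // [general motors, acadia, dark blue]
    }
}
```

With the word-level tokens, any combination of the individual words can match; with the whole-value tokens, only the exact phrases are searchable terms.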

If you’ve faced a similar situation, how did you implement it?

Thanks for your time.

What we ended up doing is creating our own TokenStream that builds a Field with the three tokens “General Motors”, “Acadia”, and “Dark Blue”.

When searching with a PhraseQuery, we were then able to match the exact tokens in the index.
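A minimal sketch of such a TokenStream (class and field names here are illustrative, and this assumes Lucene on the classpath, so treat it as an outline rather than tested drop-in code):

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.List;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Emits each supplied value as one whole token, so "General Motors"
// stays a single term instead of being split on whitespace.
public final class WholeValueTokenStream extends TokenStream {
    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final List<String> values;
    private Iterator<String> it;

    public WholeValueTokenStream(List<String> values) {
        this.values = values;
    }

    @Override
    public void reset() throws IOException {
        super.reset();
        it = values.iterator();
    }

    @Override
    public boolean incrementToken() {
        if (it == null || !it.hasNext()) {
            return false;
        }
        clearAttributes();
        termAtt.setEmpty().append(it.next());
        return true;
    }
}
```

The field can then be built with something like `new TextField("carList", new WholeValueTokenStream(List.of("General Motors", "Acadia", "Dark Blue")))` and queried with a PhraseQuery on the exact token (or a simple TermQuery, since each phrase is now a single term). Note that any normalization applied at index time, such as lowercasing, must be applied to the query term as well or nothing will match.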