The tools arrived faster than the strategy to use them.
Most of the tourism industry is still optimised for a stage of the purchase journey that no longer exists.
Five years ago, travellers searched. They typed queries into Google, browsed through a page or two of results, clicked on websites, compared options, came back to search for more. The journey was distributed across days and across dozens of touchpoints. At each step, a destination or an operator had a chance to intercept attention, and the ones that invested in visibility at the right moment captured it. It was an imperfect system, but it was legible. You could measure it. You could optimise for it. You could build a strategy around it.
Today, travellers ask. They open a conversational AI and describe what they want. They receive a shortlisted answer in seconds. The dozens of touchpoints have collapsed into one. The question for a destination or an operator is no longer "where do I rank on Google" but the far harder one: "am I in the answer the AI gives." If you are not, the journey moves forward without you. The traveller does not know you were excluded. They book somewhere else.
Tomorrow, and tomorrow is already beginning, travellers will delegate. They will not even ask. They will tell an AI agent their preferences once, and the agent will plan, book, adjust, and confirm on their behalf. The human will see a confirmation, not a list of options. The industry's last remaining leverage point, the moment at which a traveller still decides between alternatives, will have moved inside software. Phocuswright's February 2026 report on agentic AI adoption documents that more than 60% of travel businesses surveyed globally are already experimenting with or scaling agentic AI (Phocuswright, Budgets, Barriers and the Race to Agentic AI, 2026). 2026 is being called, with good reason, the year of agentic AI in travel.
This is not a prediction. It is a description of a transition already underway. The question every destination manager, hotel owner, tour operator, and attraction director should be asking right now is not whether this is coming but how much of their current strategy is calibrated for a traveller behaviour that is becoming less common every quarter.
According to the AI Opener for Destinations survey (October 2025), 99% of European destination professionals have tried AI tools. More than half use them weekly. Fewer than one in four have a systematic, organisation-wide plan for doing so. Read those numbers again. Near-universal experimentation. Rare strategy. The tools arrived faster than the strategy to use them, and the gap is not closing.
The pattern is not uniquely European. It replicates in every major tourism market with measurable consistency. The American Hotel & Lodging Association's 2025 technology survey found the same spread: widespread AI adoption in marketing departments, almost none in operations, very little in strategic planning. Destination Canada's internal benchmarking, discussed publicly at the Canadian Tourism Data Collective summit in September 2025, documents the same shape. Singapore Tourism Board's pre-MOU assessment of its own industry, published ahead of the OpenAI partnership in July 2025, found the same gap. Japan's Tourism Agency, which subsidises AI adoption for regional DMOs at up to 50% of cost, built the subsidy programme precisely because the gap was persistent even when tools were nominally free. When a pattern repeats across regulatory environments, business structures, and tourism maturity levels this consistently, it is not a cultural phenomenon. It is a structural one.
That gap is what this playbook addresses.
Not AI theory. There is plenty of that already, most of it written by people who have never had to explain to a small hotel's marketing manager why the last year of TripAdvisor optimisation is now partially wasted effort.
Not enterprise transformation frameworks. The average European tourism business has fewer than ten employees. Frameworks designed for a 500-person marketing organisation with a data science team and a legal department solve problems that most readers of this playbook do not have.
What this playbook addresses is the practical question every operator and destination manager is actually asking. What do I do first. In what order. With what tools. What do I need to know before I start. What can I safely ignore. Where are the irreversible mistakes. Where are the reversible experiments.
I have spent the last three years working on AI readiness with destinations and operators across Italy and Europe. Regional DMOs with three staff and a budget that would not cover two months of a Silicon Valley product manager's salary. Family-run hotels where the owner does the marketing, the reception, and the accounts, and does not have a spare evening to read a 200-page regulatory guidance document. Tour operators with deep local expertise and thin digital capacity. The same pattern appears in every conversation, which is itself instructive: awareness is high, adoption is scattered, strategy is absent. This playbook is the structured version of those conversations. Written from Europe. Applicable worldwide.
The traveller side
A word on the traveller side, because the operator-side numbers tell only half the story and the half they tell is the less urgent half.
According to Phocuswright's U.S. Consumer Travel Report for the first half of 2026, based on a sample of 1,570 qualified respondents, 56% of U.S. leisure travellers used Artificial Intelligence for trip planning, booking, or in-destination assistance in the past twelve months (Phocuswright, The AI Surge: Travel's Fastest Behavioral Shift in a Decade, 2025). Six months earlier, the same question returned 43%. A year earlier, 33%. In twelve months the adoption rate has grown by 70% of its own base. The share that uses AI extensively for travel, meaning across multiple stages of a single trip, doubled in that same period, from 15% to 23%. Phocuswright's analysts describe this as the fastest behavioural shift they have measured in travel over the past decade. That comparison is not hyperbole. Phocuswright's longitudinal database runs back to the early adoption phases of online booking, of mobile booking, of OTA dominance. None of those transitions moved this fast in their first measurable window.
"AI has crossed the threshold from curiosity to utility. Travelers are past the point of experimenting now. They are integrating AI into the core of how they research and shape their trips."
Pete Comeau, Managing Director, Phocuswright, November 2025

What makes the data harder to dismiss is the generational distribution. If AI travel use were concentrated among Gen Z and Millennials, one could argue the industry still has time, because those travellers are disproportionately budget-conscious, book directly less often, and influence the wider market less than their share of trips would suggest. But the 1H26 data removes that argument. Every generation posted double-digit gains in just six months. Baby boomers, historically the most resistant demographic to any digital transition in travel, more than doubled, from 13% to 27% (Phocuswright, 1H26). Gen X reached 50%. Millennials reached 74%. Gen Z stabilised at 72%, which is significant in its own way because it suggests the Gen Z curve is beginning to flatten at a saturation point that the older cohorts are still approaching. This is not a niche behaviour at the edge of the market. It is the market.
There is also a quieter shift hidden inside the same dataset. Travellers using AI for trip planning are a high-value segment. They skew younger on average, yet report higher median household income, take more trips including international ones, and spend more annually on travel than non-users (Phocuswright, Search Slips, AI Surges, November 2025). Among AI users, the vast majority made at least one concrete travel decision from the results they received. This is not an experiment being run for entertainment. It is a purchase behaviour that is already converting.
The data is U.S.-sourced, which deserves an honest note. Phocuswright's primary research panel is American, and the 1H26 wave sampled American respondents. European adoption runs directionally parallel, with cohort-specific lag of roughly six to twelve months at this point in the curve, based on cross-referencing with the European Travel Commission's own tracking data and with Phocuswright's Travel Forward 2026 observation that European usage "trails due to regulatory headwinds" but is "rising steadily across all markets." That lag will continue to shrink, as it did during the mobile booking transition, where Europe closed a comparable gap with the U.S. in about eighteen months once the behaviour stabilised. The signal across both markets is the same. The traveller side of this equation is moving faster than the operator side, and the gap is widening every quarter.
"The gap is not between those who have tried AI and those who have not. It is between those using it with a plan and those experimenting without one."
AI Opener for Destinations, Survey findings, October 2025

Why Europe
This playbook uses Europe as its primary reference frame. Not because the principles are European. Because the European market operates under the world's strictest AI regulation, the most complex multilingual environment, and a predominantly SME business structure that leaves almost no room for waste. The EU AI Act is the strictest. GDPR is the strictest. The language matrix is the densest. The budget per organisation is the smallest. If an AI strategy works here, it works anywhere. Build for the hardest version of the problem and the easier versions come free.
This is a principle worth stating explicitly, because it inverts the default assumption in most tourism technology discourse. The default assumption is that innovation moves from the most permissive regulatory environments, where experimentation is easiest, outward to the stricter ones, where it must be adapted and often watered down. The empirical evidence of the last two decades in European tourism contradicts this. The destinations that adapted earliest to GDPR were stronger, not weaker, in the five years that followed, because they had been forced to build clean data practices that their less-constrained competitors had not. The same dynamic is now beginning to play out with AI. The operators that build their AI practices to European regulatory standards are not losing ground. They are compounding an advantage that will become visible when other markets face equivalent regulation, which is already happening in Canada, the UK, Japan, Singapore, South Korea, and Brazil, with national AI frameworks either adopted or in active consultation through 2025 and 2026.
The cases in this playbook come from Slovenia and Singapore, from Glasgow and Japan, from Valencia and Canada. They are documented cases with metrics, partners, and published analyses. The transferable lessons, as the later sections will show, are the same regardless of where the case originated.
That is the context. The following sections build from it.
Why this playbook starts from Europe
Europe is not harder than other markets. It is the most demanding version of challenges every tourism market faces. Build for this standard, and you are ready for anywhere.
There is a strategic principle in engineering that applies, almost unchanged, to tourism and AI: design for the worst conditions you expect to encounter, and everything easier takes care of itself. Bridges are designed for the hundred-year storm. Airplanes are certified for failure modes the operator will never see in a full career. Data centres are built for thermal loads most equipment will never approach. The reason is not pessimism. It is compounding. A system built to the highest standard in its category tolerates every scenario below that standard without modification. A system built to an average standard fails at the edges.
European tourism, for structural reasons none of its operators chose, has become the hundred-year storm of AI adoption. Three conditions make it so, and each of them is becoming more demanding, not less, over the next twenty-four months.
Regulation
The European AI Act entered into force on 1 August 2024. It is the world's first comprehensive legal framework for AI, and its phased implementation is already reshaping how every tourism business in Europe, and every non-European business serving European travellers, can deploy AI tools (European Commission, Regulation (EU) 2024/1689).
The first two tiers of obligations are already in force. Since 2 February 2025, a list of AI practices is prohibited outright across the EU, with penalties up to €35 million or 7% of global annual turnover, whichever is higher. None of the prohibited practices are common in tourism, which is a genuine relief: no one in this industry is running social scoring systems or mass biometric surveillance. But the same deadline activated a less-discussed obligation that does apply directly: AI literacy. Since February 2025, every organisation operating in the European market must ensure that employees involved in the use or deployment of AI systems have adequate AI literacy. This applies to AI users, not just AI providers. A destination marketing office using ChatGPT to draft content is covered. A hotel using an AI chatbot on its website is covered. The obligation is real. The enforcement is just beginning.
Since 2 August 2025, General-Purpose AI models are subject to transparency and copyright compliance obligations. This affects the vendors, OpenAI, Anthropic, Google, Mistral, Meta, not the users directly, but it changes what users can expect from the tools. Every major provider has now published training-data summaries and copyright-compliance documentation in response to this deadline, and that documentation is a legitimate input to how an operator chooses vendors.
The next major deadline is 2 August 2026, four months from the moment this playbook is first published. That is when Article 50 becomes enforceable. Article 50 is the transparency layer that affects almost every tourism AI use case in practice. It requires that users be informed, clearly and at the moment of interaction, when they are dealing with an AI system. A chatbot on a hotel website will need a disclosure. An AI-generated response in a customer-service channel will need a disclosure. Synthetic content, including AI-generated images of a destination, will require machine-readable marking where technically feasible. These are not heavy obligations. They are cheap to meet. But they must be met by 2 August 2026, and the organisations that do not have a plan in place four months from now will be non-compliant at enforcement.
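To make "cheap to meet" concrete: one hedged sketch of the chatbot case, in Python, with a hypothetical function name and disclosure wording that is illustrative only, not a vetted legal text. The idea is simply that the first reply of every session carries the notice, in the user's language, at the moment of interaction.

```python
# Minimal sketch: prepend an AI disclosure to the first reply of a chatbot
# session. Function name and wording are illustrative, not legal advice.

DISCLOSURE = {
    "en": "You are chatting with an AI assistant, not a human agent.",
    "it": "Stai parlando con un assistente AI, non con un operatore umano.",
    "de": "Sie chatten mit einem KI-Assistenten, nicht mit einem Menschen.",
}

def open_session(first_reply: str, lang: str = "en") -> str:
    """Return the first chatbot reply with the AI disclosure prepended
    in the user's language, falling back to English."""
    notice = DISCLOSURE.get(lang, DISCLOSURE["en"])
    return f"{notice}\n\n{first_reply}"

print(open_session("Welcome! How can I help with your stay?", "en"))
```

The same pattern, a fixed notice injected at the first point of interaction, covers customer-service channels as well; the regulatory work is in choosing compliant wording, not in the engineering.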
The high-risk tier of the Act takes effect 2 August 2027. Most tourism applications will not fall into it, but some will: any AI system used for employment decisions, for example, or for biometric identification, or for credit-scoring of customers. The mapping is worth doing. It is usually a one-afternoon exercise. It is not one to defer.
Member states have started layering additional national rules on top of the Act, and this matters for operators serving specific markets. Italy's own Artificial Intelligence Law (Law No. 132/2025) entered into force on 10 October 2025, with administrative fines up to €774,685 and, in serious cases, disqualifying measures under Decree 231: suspension of licences, exclusion from public procurement, prohibition on advertising goods or services (Law No. 132/2025, Gazzetta Ufficiale, October 2025). For a hotel or tour operator, losing the ability to advertise for a year is effectively a business-ending sanction. The national penalties vary across member states, but the Italian example is representative of the direction.
GDPR has governed personal data since 2018 and remains fully applicable to AI use. Every booking record is personal data. Every review with an author is personal data. Every AI chatbot conversation that identifies a user is personal data. Using AI tools with personal data requires a lawful basis for processing, a Data Processing Agreement with the vendor, and data minimisation in practice. These are not difficult to meet with any major AI platform: every serious vendor now publishes EU-compliant DPAs. They are trivial to access, usually a one-click acceptance in the admin panel. But they must be in place, and the operator must know they are in place.
NIS2, which set cybersecurity expectations from 17 October 2024, touches tourism less directly but is increasingly relevant through supply chains. Most independent tourism businesses are not in scope. Those that operate within larger destination infrastructures, regional DMO platforms, or national booking systems often are, by supply-chain exposure. At minimum, NIS2 sets an expectation that an operator knows which external tools can access its systems and has basic incident awareness in place. AI tool adoption is a natural prompt to review this.
In November 2025 the European Commission introduced a Digital Omnibus proposal, a package to streamline and consolidate the AI Act, GDPR, NIS2, and adjacent frameworks into a more coherent compliance architecture. It is still in legislative process as this playbook is published. It will likely simplify, not complicate, the current picture, but it will not change the core obligations. Plan for the current framework.
These are not obstacles. They are the operating environment. Other markets are now tracking toward equivalent frameworks. Canada's Artificial Intelligence and Data Act is in late-stage parliamentary review as this playbook is published. The United Kingdom declined to adopt a cross-economy AI law in 2025, leaning instead on sector regulators, which creates a different, but not lighter, operating environment. Singapore's Model AI Governance Framework has been progressively formalised through 2024 and 2025. Japan's Hiroshima AI Process Code of Conduct has been active since late 2024. South Korea passed its AI Basic Act in January 2025, with full implementation in January 2026. Brazil's AI Bill (PL 2338/2023) is in advanced committee stage. The regulatory direction is global. Europe is simply ahead.
The organisations that build compliant AI practices now are not just managing risk. They are building ahead of where every other market is heading. This is the first compounding advantage of starting from the hardest version.
Language
Europe has 24 official EU languages and dozens more in active commercial use. A mid-sized hotel in Andalusia needs AI visibility in German, French, British English, Dutch, Italian, and Spanish simultaneously. A ski resort in the Dolomites serves guests in Italian, German, Polish, Czech, and Russian in the same season. A DMO in Lisbon markets to source countries whose preferred travel-research language is, in most cases, not the destination's own. None of these situations is unusual. All of them are complex.
This would be a technical problem even if every AI tool worked equally well across every language. It does not. Large Language Models are not uniform in their linguistic coverage. English remains the language of densest training data, tightest factual grounding, and most frequent update cycles. European languages trail English by varying distances. Italian and German are close to English quality in most current models. French and Spanish are close. Dutch, Portuguese, and Polish are measurably weaker. Maltese, Estonian, Slovene, and many other smaller EU languages are significantly weaker still. When a traveller researches a destination in their native language, the AI tool producing the answer is working with different fidelity depending on which language they chose.
The same underlying destination can therefore produce different AI answers depending on the query language. A hotel that appears prominently in an English-language query may be invisible in a Polish-language query, not because the hotel does anything wrong, but because the AI's model of that hotel in Polish is incomplete. This is not a theoretical problem. Operators working on AI visibility with multilingual sources can usually reproduce this gap in five minutes across any three languages.
Multilingual AI visibility is not uniquely European. It is the challenge facing any destination that draws visitors from multiple language markets. Canada operates in English and French and draws travellers from dozens of language groups. Singapore operates in four official languages, English, Mandarin, Malay, and Tamil, and its Tourism Board's Mafengwo partnership, launched in 2025, specifically targets AI-personalised discovery for Chinese-language travellers. Morocco, Japan, Thailand, Mexico: every major tourism market with strong inbound travel from multiple language regions faces a version of this. Europe simply faces more languages at once, with denser proximity, and with a regulatory environment that constrains shortcuts.
The GEO (Generative Engine Optimisation) and AEO (Answer Engine Optimisation) strategies in this playbook, covered in depth in Section E, are built for this complexity. They transfer directly to any multilingual context, because the mechanism, structuring content so that AI engines can produce consistent, high-fidelity answers about you regardless of query language, is the same everywhere. A Singapore hotel applying the techniques to English and Mandarin would follow the same discipline as a Barcelona hotel applying them to English, German, and Spanish. The specifics scale. The discipline does not change.
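The mechanism can be made concrete with a small sketch. Assuming a hypothetical hotel with one canonical FAQ set maintained per language, the Python below renders each language version as schema.org FAQPage JSON-LD, so every language version of a page carries the same machine-readable answer for AI engines to retrieve. The content and structure here are illustrative, not a prescribed template.

```python
import json

# One canonical FAQ set, rendered as schema.org FAQPage JSON-LD per
# language, so AI engines retrieve the same structured answer whatever
# the query language. Q&A content is hypothetical.

faqs = {
    "en": [("Is breakfast included?",
            "Yes, breakfast is included in all rates.")],
    "de": [("Ist das Frühstück inbegriffen?",
            "Ja, das Frühstück ist in allen Raten enthalten.")],
}

def faq_jsonld(lang: str) -> str:
    """Build FAQPage JSON-LD for one language version of a page."""
    doc = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "inLanguage": lang,
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in faqs[lang]
        ],
    }
    return json.dumps(doc, ensure_ascii=False, indent=2)

print(faq_jsonld("de"))
```

The design point is the single source of facts: every language version is generated from the same data, so the answers cannot drift apart between languages.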
Structure
More than 90% of European tourism businesses are SMEs, according to Eurostat's 2024 tourism sector statistics. Most run without a dedicated digital team, without an in-house data function, and without an external technology advisor on retainer. The average European hotel has fewer than 30 rooms. The average tour operator has fewer than 10 employees. The average regional DMO has fewer than 15 staff, counting everyone from the director to the intern.
This shape of the industry is not unique to Europe. It is the global norm. Tourism is a sector dominated worldwide by small operators, independent guides, family-run accommodation, and regional destination bodies with limited budgets. The proportion varies, but the dominance of small actors is near-universal. Australia's accommodation sector is more than 70% SME-owned. In New Zealand, 56% of hotel inventory consists of properties with fewer than 50 rooms. Japan's regional tourism boards are almost all under 20 staff. Thailand's tour operator market is dominated by local guides working independently. The relevant question for AI adoption is therefore not what a large DMO with a data science function can do. It is what a regional destination marketing office with three staff, or a 20-room hotel in Porto, or Queenstown, or Chiang Mai, can realistically implement with one afternoon per week of internal bandwidth.
Here is the camera flip that matters.
Most AI discourse in tourism is written by and for the largest 5% of organisations. Large hotel groups. International OTAs. Tier-one DMOs with data teams. National tourism organisations of G7 countries. Their problems are real, but their solutions are shaped by resources the other 95% of the industry does not have. When a consultancy talks about "AI transformation," it almost always means a 12-month strategic engagement that costs what an independent hotel earns in a quarter. When a vendor talks about an "AI-native destination intelligence platform," it usually means a six-figure annual contract.
The 95% need a different kind of guidance. Not simpler, but different. Specific, actionable, cheap at the start, phased in order of leverage, and written in the language the operator actually uses. This playbook is written for that 95%. Everything in it assumes the reader is understaffed, underfunded, and working against a calendar that will not pause while they adapt to AI. The large operators will find it useful too, because the fundamentals are the same at every scale. But it is not primarily written for them.
That is who this is written for. That is why it starts where it does.
You think the traveller did not find you.
The traveller was never looking.
The travel purchase journey used to take time. Awareness, research, consideration, booking: each stage stretched across days or weeks, and at each stage a destination or operator had a chance to intercept the decision. That model has compressed into a conversation.
For twenty years, the mental model of a travel purchase was a funnel. A traveller became aware of a destination, often through a trigger event: a friend's photograph, a magazine feature, a piece of news. Awareness moved into research: Google searches, blog posts, TripAdvisor, Booking.com comparisons, direct visits to hotel websites, forum threads on Reddit or Lonely Planet. Research moved into consideration: shortlisting, price-checking, calendar-checking, asking partners. Consideration moved into booking: on a website, through a travel agent, or through an OTA. The funnel had leaks at every stage, but it was also structured enough that a destination or operator could design interventions at each point and measure the impact of each.
The funnel still exists. It just compressed. What took weeks now takes minutes. What took dozens of touchpoints now takes one conversation.
A traveller today opens ChatGPT, Perplexity, Claude, Gemini, or Copilot and describes what they want: "I have five days in late November, I want to go somewhere in Europe that is still warm, I like walking and food and not crowds, my budget is around three thousand euros for two people." Within seconds, they have a shortlisted answer drawing from dozens of sources simultaneously: travel blogs, TripAdvisor aggregates, operator websites, weather data, flight availability. The tool produces three or four destinations with reasons. It produces sample itineraries. It recommends specific accommodation and experiences. In many cases, the traveller books without ever visiting a single destination website or operator page during the early stages of discovery.
This is not a future scenario. According to Phocuswright's 2H25 Search Slips, AI Surges report, generative AI platforms now account for 15% of travel research tool usage among U.S. travellers, up from 6% in late 2023. General search, still the largest single tool, has declined from 51% in late 2024 to 36% by the second half of 2025. A fifteen-point drop in twelve months is not a plateau. It is a reallocation of attention at speed, and it is accelerating. Phocuswright's own analysts note that the question is no longer whether AI discovery will overtake traditional search for trip planning. It is when.
Standalone AI platforms, meaning tools like ChatGPT and Claude accessed directly, are the most-used. 64% of travellers who used AI for trip planning used a standalone AI platform (Phocuswright, 1H26). 81% of AI users identified standalone AI platforms as the most useful environment for AI-powered trip planning. This matters for operators because standalone AI platforms are the ones where you have the least direct influence over the output. You cannot buy ads inside ChatGPT's answers. You cannot, yet, bid for position in Perplexity. The AI's answer about your destination is constructed from whatever the model has learned about you from its training data and real-time retrieval, and if that representation is incomplete, distorted, or outdated, the answer is too.
The camera flip
Here is the camera flip that most operators miss.
You think the traveller did not find you. The traveller was never looking for you. They asked an AI. The AI did not include you. The conversation moved on. The traveller did not reject your offer. They never saw your offer. You were not lost at the booking stage, where you have been optimising for years. You were excluded upstream, before your channels were even consulted.
This reframing has operational consequences. The marketing spend most operators allocate to the visibility stage is calibrated for a world where visibility happens on Google. If visibility is now increasingly decided inside large language models, the allocation is partially misaligned. Not fully wrong: Google still matters, and will continue to matter for the verification layer discussed below. But directionally misaligned. The calibration needs to shift, and the shift is not a simple reallocation of budget. It is a different set of activities, tools, and measures.
These numbers tell the same story from different angles. Visibility is moving upstream. Traffic is declining at the validation layer. AI-mediated discovery is growing. Behaviour is spreading across every demographic.
But one finding cuts against the most alarmist version of this narrative, and it is worth understanding precisely because it changes how you respond.
According to Phocuswright (1H26), 51% of travellers who received AI search answers still clicked through to source websites for more information. Only 8% found the AI response sufficient without visiting any website. This finding deserves a direct quote from Mike Coletta, senior manager of research and innovation at Phocuswright:
"Half of travelers who used AI in search engines told us they still clicked through to source websites after seeing AI answers in search. This violates the common narrative of a zero-click world. AI is definitely reducing click-through in search overall, but travel is much more resilient because it is higher-stakes and verification-heavy, especially in the transaction phase."
Mike Coletta, Senior Manager Research & Innovation, Phocuswright, November 2025

Travel is more resilient to zero-click behaviour than other sectors because trip decisions are higher-stakes and verification-heavy. People do not book a hotel in Lisbon the way they look up a recipe. They read. They check. They compare. They ask a partner. They sleep on it. They come back the next day with fresh questions. The AI gives them a candidate answer. Their own verification process decides whether that answer converts.
Your website is not obsolete. It is further back in the sequence than it used to be. Discovery now happens inside AI. Verification and decision still happen on your site, on your Google Business Profile, in your reviews, in the conversation on a friend's group chat about your destination. The shift is in the sequence, not in the destination. But if you are absent in stage one, that 51% who would have clicked through never gets the chance. You do not get to the verification round because you were not in the draft.
Three stages, not four
Discovery and research, which used to be two distinct stages taking days across multiple platforms, now happen inside a single AI conversation. What remains is validation and booking. Validation is the stage where the traveller checks whether the AI's recommendation holds up: does this hotel actually have the amenities described, does this destination actually fit the criteria, are the reviews consistent with the AI's summary. Booking is the stage where the transaction happens, through a direct channel, an OTA, or increasingly, through an agent.
You cannot skip stages two and three. The traveller will verify. The traveller will book. These are irreducible. What you can do, and what most of the rest of this playbook is about, is make sure your destination or operation is present and accurately represented in stage one, because stages two and three only happen for candidates that made it through stage one.
The compression of the funnel is not, on net, bad news for operators who understand the shift. It is catastrophic news for operators who do not.
Four channels. Only three are under your control.
Your digital presence is no longer your website alone. It is the sum of four distinct channels, each speaking to AI tools in a different way. Three of them you control. One you cannot.
For most of the last two decades, an operator could get away with thinking of their digital presence as their website, with social media and review sites as auxiliary. That mental model is now actively misleading. Large language models, which are the engines driving the shift described in the previous section, build their representation of a destination or business by aggregating signals across multiple sources simultaneously. A website alone, however well-optimised, does not produce a complete or accurate representation of a business in the model's view. It produces one contributor to that representation.
The four channels that matter, in the order of how tightly you can control them, are the following.
- Your website: the canonical source for who you are, what you offer, and how to book. The only channel where you fully control wording, structure, facts, and updates. Everything else is downstream of it.
- Your Google Business Profile: AI tools cross-reference what you publish against GBP. Inconsistency between your site and GBP is a primary reason AI tools refuse to recommend a business: they cannot tell which version is true.
- Your social media: not a traffic source. A signal source. An account dormant for months tells AI tools the business is neglected. Regular, specific posts mark a business as active and confident in its offer.
- Your digital traces: reviews, third-party listings, forum threads, AI-generated summaries of your business. Your reputation as AI sees it is the aggregate of everything online, not the subset you created.
The website layer
Your website is the canonical source for who you are, what you offer, and how to book. It is the only channel where you fully control the wording, the structure, the facts, and the updates. Everything else is downstream of it.
For AI visibility, the rules are straightforward and mostly cheap to implement. Clean HTML with proper heading hierarchy. Schema markup, particularly the LocalBusiness, Hotel, and TouristAttraction schemas from Schema.org. Factually consistent content across pages: your opening hours on the homepage must match the contact page, which must match the booking engine, which must match the structured data. FAQs written in the language a real traveller uses, as complete questions and answers, not as decorative marketing copy. Page load speed under three seconds. Mobile-first formatting, not mobile-adapted desktop formatting. A robots.txt that does not accidentally exclude the AI crawlers that matter: OpenAI's GPTBot, Anthropic's ClaudeBot, Perplexity's PerplexityBot, and the major search crawlers.
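Schema markup is the least familiar of these basics, so a minimal sketch may help. The following is a hypothetical JSON-LD block for a small hotel; every value is a placeholder to replace with your real, GBP-consistent details, and the `@type` would be `LocalBusiness` or `TouristAttraction` for other kinds of operations.

```html
<!-- Hypothetical Hotel JSON-LD sketch. All values are placeholders:
     replace them with the exact details shown on your GBP, so the two
     sources never disagree. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Hotel",
  "name": "Example Hotel",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Street",
    "addressLocality": "Example Town",
    "addressCountry": "PT"
  },
  "openingHours": "Mo-Su 08:00-22:00",
  "telephone": "+00 000 000 000",
  "url": "https://www.example.com"
}
</script>
```

Validate the result with a structured-data validator before publishing. The robots.txt side is even smaller: confirm that the crawlers named above (GPTBot, ClaudeBot, PerplexityBot) are not disallowed.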
Most websites in the tourism industry fail on at least three of these basics. Not because the teams do not know they matter, but because the websites were built five years ago when these specific optimisations did not exist, and no one has had the time or the budget to revisit them systematically. The good news: auditing and fixing these is a matter of days, not months, and the return on AI visibility is disproportionate to the effort.
This is the Answer Engine Optimisation layer, discussed in depth in Section E. It is the single highest-leverage thing most operators can do in the first ninety days of their AI adoption journey, and it costs almost nothing beyond the time to execute.
The Google Business Profile layer
AI tools do not take your website at face value. They cross-reference what you publish on your site against other authoritative sources, and for a local business, the single most authoritative source is your Google Business Profile (GBP), previously known as Google My Business. The details that get cross-checked: opening hours, exact address, photos, services offered, amenities, payment methods accepted, languages spoken, accessibility features.
Inconsistency between your site and your GBP is one of the most common reasons AI tools refuse to recommend a specific business. The model sees two versions of the truth and cannot tell which is correct. Faced with uncertainty, the model routes around the business and recommends something where the signals align. This is a common failure mode, and it is almost always invisible to the operator, because the operator never sees the query they were not included in.
GBP completeness is not a one-time project. It is a maintenance discipline. Hours change seasonally. Services evolve. New photos are needed quarterly to signal the account is active. Reviews need responses, not because the algorithm rewards it mechanically, but because an unanswered review cluster signals to both AI tools and humans that the business is not attending to its own presence.
The good news on GBP: it is free, and the effort to maintain it well is about thirty minutes per week for a small property. The bad news: a neglected GBP is an actively harmful signal. It is better to have no GBP than an outdated one with mismatched information, because an outdated GBP pollutes the signal across every downstream tool.
The social layer
Social media, for most of the last fifteen years, was understood primarily as a traffic channel. An operator would post content, hope for engagement, and measure success in clicks and conversions. That understanding is now outdated.
For AI visibility, social media matters as a signal of activity and voice, not as a traffic source. An Instagram account that has posted three times in the past six months tells AI tools, and humans, that the business is either dormant, understaffed, or disengaged. Regular posts mark a business as active, current, and confident in its own offer. A business that shows up every week with specific, local, grounded content is a business the AI tools can more easily cite with confidence.
You do not need to post daily. You need to post consistently. A small operator with one weekly post featuring a specific dish, a specific view, a specific guest experience, told in their own voice, produces a better AI signal than a large operator with daily generic content that looks templated. The specificity matters more than the volume.
There is also a quiet secondary effect. Social media content is increasingly indexed by AI tools, either directly (through platform integrations) or indirectly (through scraping of publicly visible posts). A destination or business with a rich, specific social footprint contributes to its own AI representation. A destination or business with a thin or generic social footprint contributes nothing to it.
Phocuswright's 2H25 research notes that social networks held steady and gained slightly as a travel research tool, from 16% in late 2023 to 19% in 2025. This is a small change, but it is directionally important. Social media is not dying as a travel research channel. It is converging with AI as part of a broader set of tools travellers use in parallel.
The digital traces layer
This is the channel you do not control. Reviews on TripAdvisor, Booking.com, Google, Expedia, Yelp. Third-party listings on DMO websites, tour marketplaces, niche travel blogs. Mentions in forum threads, Reddit discussions, Facebook groups, industry newsletters. Press coverage in local, national, and international media. Podcasts. YouTube travel vlogs. AI-generated summaries of your business that now exist inside language models themselves.
Your digital traces are the aggregate of everything that exists about you online, across every source, including sources you were never aware of. This is the layer that most often contradicts what you say about yourself on your own channels, and when AI tools face that contradiction, they weight external sources more heavily than self-published ones, because external sources are treated as more objective.
You can influence this layer. You cannot control it.
You can respond to reviews thoughtfully and consistently, which changes the tone of your public record. You can build relationships with local press and industry writers, which generates the kind of coverage AI tools weight highly. You can encourage satisfied guests to share their experience on their own channels, which adds diversity to your public footprint. You can monitor what is being said about you in real time, using tools like Google Alerts, Mention, or dedicated reputation platforms, which lets you identify and address emerging narratives before they calcify.
What you cannot do is unilaterally rewrite the record. If reviews describe a specific problem with consistency, fixing the reviews requires fixing the problem, not editing the reviews. If AI tools have formed a particular representation of your business, changing it requires changing the underlying signal landscape, not gaming the model.
This is where the most common strategic mistake happens. Operators and destinations invest heavily in the first channel (the website), moderately in the second and third (GBP and social), and treat the fourth as either inevitable or manageable through a single reputation management vendor. The fourth channel is neither inevitable nor manageable through a single vendor. It is the channel that most accurately reflects how the market actually perceives you, and it is the channel AI tools pay the most attention to when they disagree with your own self-presentation.
"Your website is what you say about yourself. Your digital traces are what AI thinks is true about you. The gap between the two is your vulnerability."
Mirko Lalli, 2025

The four channels work together. Each informs the others. Each contributes to the composite representation that AI tools produce when a traveller asks about you. The operators making progress in AI visibility are the ones who understand the four channels as a system, not as separate projects with separate budgets. The operators falling behind are the ones treating their website as the only channel that matters, while their GBP is outdated, their social is dormant, and their review score is drifting without response.
Three EU regulations. The minimum you need to know.
These three frameworks define the legal perimeter of AI adoption for any business processing data about European travellers. Non-EU businesses are affected when they serve EU customers. The links below go to European Commission official sources.
The purpose of this section is not to provide legal advice. It is to equip the reader with enough understanding to have a competent conversation with their lawyer, their data protection officer, or their technology vendor, and to know which questions to ask. Full legal analysis is covered in Section G.
There are three frameworks. They are cumulative, not alternative, and they interact with each other in ways that matter for operational decisions.
EU AI Act (Aug 2024)
The world's first comprehensive legal framework for AI, entered into force 1 August 2024. Four deadlines matter for tourism: 2 Feb 2025 (prohibited practices banned + AI literacy obligation active); 2 Aug 2025 (GPAI providers must comply with transparency and copyright); 2 Aug 2026, four months from now (Article 50 transparency for chatbots, AI-generated content, synthetic media); 2 Aug 2027 (full applicability for high-risk AI systems). Penalties: up to €35M or 7% of global turnover.
EU AI Act: European Commission official overview →

GDPR (May 2018)
Bookings, reviews, inquiries, analytics, AI chatbot conversations. All of it is personal data when it can be tied back to an identifiable person. The GDPR obligations that matter most for AI adoption: a lawful basis for processing, a Data Processing Agreement with every AI vendor you send data to, and data minimisation (do not send the AI tool more than it needs). These are not difficult to meet with any major AI platform. They do require a ten-minute check before adoption.
GDPR: European Commission data protection hub →

NIS2 (Oct 2024)
Most independent tourism businesses are not directly in scope. If you operate within a larger destination infrastructure, a regional DMO platform, or a national booking system, you may be part of a supply chain that is. At minimum, NIS2 sets the expectation that you know which external tools can access your systems and have basic incident awareness in place. If you have not reviewed this recently, the AI tool adoption process is a good prompt to do so.
NIS2 Directive: European Commission official overview →

EU AI Act in detail
Regulation (EU) 2024/1689, known as the EU AI Act, is the world's first comprehensive legal framework for AI. It entered into force on 1 August 2024 and applies in phases. For a tourism operator, four deadlines are relevant.
2 February 2025, already enforced. Prohibited AI practices are banned, with penalties up to €35 million or 7% of global turnover. None of the prohibited practices are common in tourism. On the same date, a separate obligation took effect that does affect tourism directly: AI literacy. Since February 2025, every organisation operating in the EU must ensure employees involved in AI use or deployment have adequate AI literacy. This applies to you even if you are a small operator using ChatGPT to draft content. What counts as adequate AI literacy is not precisely defined, but the European Commission's guidance suggests, at minimum, that users understand what the tool does, what its limitations are, and when its output should be reviewed before being used. A thirty-minute internal training document, updated yearly, typically satisfies this for a small team. Larger organisations need something more formal.
2 August 2025, already enforced. General-Purpose AI (GPAI) model providers must comply with transparency and copyright obligations. This affects vendors (OpenAI, Anthropic, Google, Meta, Mistral), not tourism businesses directly, but it changes what you can expect from the tools. Every major provider now publishes training-data summaries. You can use that documentation when you evaluate vendors.
2 August 2026, four months from this playbook's publication. Article 50 transparency obligations take effect for AI systems with direct user interaction. A chatbot on your website will need to clearly disclose that it is an AI. AI-generated content in your customer-service channels will need disclosure. Synthetic content, including AI-generated images of your destination, will require machine-readable marking where technically feasible. These obligations are cheap to meet: a sentence on the chatbot, a small icon on synthetic imagery, a clear disclosure in automated emails. They must be in place by 2 August 2026. Non-compliance carries fines up to €15 million or 3% of global turnover.
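For the chatbot case, the disclosure really can be a single visible sentence. The snippet below is a hypothetical widget fragment with illustrative wording, not legal advice and not a prescribed form of words:

```html
<!-- Hypothetical chat-widget header. The class names and wording are
     illustrative; the point is that the AI disclosure is visible
     before the visitor starts typing. -->
<div class="chat-header">
  <span class="chat-title">Chat with us</span>
  <p class="ai-disclosure">
    You are chatting with an AI assistant.
  </p>
</div>
```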
2 August 2027. Full applicability for high-risk AI systems. Most tourism applications will not fall into the high-risk tier. Some will, specifically: AI systems used for employment decisions (including hiring and workforce management), for biometric identification, for credit-scoring of customers, or for determining access to essential services. If your business uses any of these, you need a detailed compliance plan with conformity assessments and EU database registration. If your business does not use any of these, the 2026 deadline is your priority, and 2027 is a horizon to monitor but not to worry about today.
Italy's national implementation (Law No. 132/2025, in force since 10 October 2025) is notable because it establishes administrative fines up to €774,685 and, in serious cases, disqualifying measures under Decree 231, which include suspension of commercial authorisations, exclusion from public contracts, and a prohibition on advertising goods or services. For a small hotel or tour operator, these disqualifying measures are existential. Other member states are finalising their national implementations through 2026. The penalties vary, but the enforcement direction is clear.
What this means in practice
For a small-to-medium tourism operator adopting AI in 2026, the compliance checklist is shorter than it appears. By 2 August 2026: AI literacy training documented and updated annually; transparency disclosure on every AI chatbot and AI-generated content channel; DPAs signed with every AI vendor; data minimisation practiced in every workflow; a basic inventory of which AI tools access which data.
This is a week's work of setup, and then a monthly hour of maintenance. It is not an excuse not to adopt AI. It is the operating discipline that makes AI adoption sustainable.
What you actually control
The list of things you cannot control is long. Source market behaviour, OTA algorithms, exchange rates, what AI systems have been trained on. That list is not getting shorter.
One of the recurring patterns in conversations with operators over the last three years is a kind of learned helplessness specific to the digital transition. The operator knows the landscape has shifted. The operator knows their position in that landscape is weakening. The operator also believes, often correctly, that most of the forces reshaping their market are outside their ability to influence. Source markets rise and fall on political and economic currents. OTA algorithms change without warning. Exchange rates move. What a particular AI model knows about a particular destination was mostly decided when the model was trained, in a process to which the operator had no input.
The helplessness is understandable. It is also, in practical terms, wrong. The list of uncontrollable things is long, but the list of things an operator actually controls is not zero, and more importantly, the controllable list is what determines outcomes. Not the uncontrollable list.
The list of things you can act on is smaller than the list of things you cannot act on. It is also what actually determines whether you succeed or fail in the next twenty-four months. Four items. Short list. Every operator has them.
- AI search visibility: the structure of your website, the completeness of your GBP, the consistency of your information across every platform that references you. These determine whether AI tools recommend you or not. Actionable today, at low cost, by any organisation regardless of size. This is the first thing to address, before choosing any AI tool or subscribing to any platform.
- Your data: the quality of your booking records, reviews, pricing history, and visitor behaviour. AI tools produce outputs that are only as good as the data they draw from. The organisations that invested in clean, connected data before AI became available are now extracting disproportionately more value from it. This is addressed in depth in each vertical section of this playbook.
- Your team's capability: the ability to choose the right tool for a specific task, prompt it effectively, review its outputs critically, and recognise when it is wrong. Not a technical skill. Professional judgment applied to new tools. Capability differentiates outcomes, not access. The path that works is peer learning anchored in real work, not generic external training.
- Your content: the specificity of what you publish about your destination or business. Generic content is invisible to AI systems and unconvincing to humans. The organisations that describe what they actually offer, in the language their visitors actually use, are the ones that get cited, recommended, and booked.
AI search visibility
The structure of your website, the completeness of your GBP, the consistency of your information across every platform that references you, the AEO and GEO disciplines covered in Section E. These determine whether AI tools recommend you or not.
This is actionable today, at essentially no cost, by any organisation regardless of size. A hotel with a €200-a-year hosting plan can implement better schema markup than a hotel with a €200,000-a-year technology budget, because the discipline is about craft, not about spend. A regional DMO with three staff can build better FAQ structure for AI-friendly content than a national DMO with thirty, because the smaller team moves faster and knows their destination better.
This is the first thing to address, before choosing any paid AI tool or subscribing to any platform. It is the foundation. Every subsequent action compounds better on top of good visibility foundations than on top of weak ones.
Your data
The quality of your booking records, reviews, pricing history, visitor behaviour, email lists, CRM segmentation. Every AI tool produces outputs that are only as good as the data they draw from. An AI demand forecasting tool applied to messy booking data produces messy forecasts. An AI content generator applied to a clean brand guideline document produces on-brand content. An AI customer-service assistant trained on a thin FAQ produces thin answers.
Valencia's case, covered in Section I, is the clearest published example of this principle in tourism. Valencia became the 2024 European Capital of Smart Tourism not because it adopted AI earliest or spent the most. It spent years before AI was generally available cleaning and connecting its destination data across municipal systems, tourism infrastructure, and private operators. When AI tools became practical in 2023 and 2024, Valencia was able to deploy them immediately and effectively, because the underlying data was ready. Destinations without that data foundation are still, three years later, struggling to get AI tools to produce useful outputs, because the outputs are only as good as the inputs.
The uncomfortable truth here is that data is a multi-year investment, and most operators did not start it. For those who did not, the answer is not to despair or skip the step. It is to start now, in small, specific, high-leverage patches. Clean one dataset first: your GBP information. Clean the next: your review response history. Clean the next: your email list segmentation. Each cleaned dataset becomes a better input for AI tools. The compound effect is significant over eighteen months.
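To make "clean one dataset first" concrete, here is a minimal Python sketch of the kind of consistency check worth running. All names and data below are hypothetical; in practice you would load your real website, GBP, and booking-engine records.

```python
# Sketch: flag fields whose published value differs across channels
# (website, GBP, booking engine). The data here is invented; in a real
# audit you would pull these records from your own exports.

def find_mismatches(sources: dict[str, dict[str, str]]) -> list[str]:
    """Return a description of every field that differs across sources."""
    mismatches = []
    # Collect every field name that appears in any source.
    fields = {f for record in sources.values() for f in record}
    for field in sorted(fields):
        # Lightly normalise values before comparing, so trivial
        # whitespace or casing differences are not flagged.
        values = {
            name: record.get(field, "").strip().lower()
            for name, record in sources.items()
        }
        if len(set(values.values())) > 1:
            mismatches.append(f"{field}: {values}")
    return mismatches

sources = {
    "website": {"phone": "+351 210 000 000", "hours": "Mo-Su 08:00-22:00"},
    "gbp":     {"phone": "+351 210 000 000", "hours": "Mo-Sa 08:00-22:00"},
    "booking": {"phone": "+351 210 000 000", "hours": "Mo-Su 08:00-22:00"},
}

for issue in find_mismatches(sources):
    print(issue)  # flags the Sunday-hours disagreement with GBP
```

The point is not the script but the habit: every field that appears in more than one place is a field that can drift, and drift is exactly the signal that makes AI tools route around a business.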
Your team's capability
The ability to choose the right tool for a specific task, prompt it effectively, review its outputs critically, and recognise when it is wrong. Not a technical skill. Professional judgment applied to new tools.
I have encountered organisations with expensive AI subscriptions getting worse results than others using free tools with a clear intent. The gap is almost always capability, not access. The team that knows what it wants from an AI tool, how to describe that clearly, and how to critique the output gets disproportionately more value than the team that simply has more tools. This is not unique to AI. It is the same pattern that appeared in the spreadsheet transition of the 1990s, the internet transition of the 2000s, and the mobile transition of the 2010s. The tools democratise access. Capability differentiates outcomes.
Capability builds in small groups and through practice. The ETC's 2025 research, covered in detail in Section B, confirms what I have seen in every organisation working on this: the path that works is not generic training. It is identifying the people on your team who are already experimenting with AI, giving them structured space to develop their skills, and asking them to teach two colleagues each over the following quarter. Peer-led learning, grounded in the team's actual work, is what moves an organisation. External training alone rarely does.
The European Commission's AI literacy obligation, effective February 2025 and discussed in Section A5, is a regulatory framing for what is, in any case, the right operational decision. Building team capability is both compliant and effective. Not building it is both non-compliant and ineffective.
Your content
The specificity of what you publish about your destination or business. Generic content is invisible to AI systems and unconvincing to humans. The operators who describe what they actually offer, in the language their visitors actually use, in detail that only someone who knows the place could provide, are the ones that get cited, recommended, and booked.
This is not a content volume problem. It is a content quality problem. A small hotel that publishes one honest, specific, beautifully written piece per month about a specific experience guests can have there, with photographs of the actual place and not stock images, will outperform a hotel publishing four templated blog posts per month optimised for keywords that nobody actually uses.
AI tools, it turns out, are good at detecting genericism. They can sense when content is about nothing in particular, written for no one in particular. They respond, directly in their training and indirectly in their reasoning, to specificity. A description that names the neighbourhood, the street, the season, the time of day, the specific dish, the specific view, is the description that AI tools pick up and humans trust.
Specificity is the one content quality that cannot be faked through scale. You can publish more generic content. You cannot publish more specific content without knowing your subject. This is why small operators with deep local knowledge have an advantage that scale cannot match, provided they use it.
These four things are in your hands. Start here.
"The operators asking 'when will AI be ready for tourism' are asking the wrong question. AI is ready. The question is whether your business is ready for AI."
Mirko Lalli, March 2026

If you take nothing else from Section A, take this: the transition that is happening is not optional, and the agency you still have is concentrated in four specific, controllable places. The operators who move on those four will be better positioned at the end of 2026 than almost any competitor who is waiting for clarity. The clarity is not coming. The behaviour has already changed.